Massively-Parallel Feature Selection for Big Data
We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm for
feature selection (FS) in Big Data settings (high dimensionality and/or sample
size). To tackle the challenges of Big Data FS, PFBP partitions the data matrix
both in terms of rows (samples, training examples) and columns
(features). By employing the concepts of p-values of conditional independence
tests and meta-analysis techniques, PFBP manages to rely only on computations
local to a partition while minimizing communication costs. Then, it employs
powerful and safe (asymptotically sound) heuristics to make early, approximate
decisions, such as Early Dropping of features from consideration in subsequent
iterations, Early Stopping of consideration of features within the same
iteration, or Early Return of the winner in each iteration. PFBP provides
asymptotic guarantees of optimality for data distributions faithfully
representable by a causal network (Bayesian network or maximal ancestral
graph). Our empirical analysis confirms a super-linear speedup of the algorithm
with increasing sample size, linear scalability with respect to the number of
features and processing cores, while dominating other competitive algorithms in
its class.
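The meta-analysis step that lets PFBP combine evidence computed locally on each data partition can be illustrated with Fisher's method, a standard way to pool independent p-values. This is a minimal sketch under my own assumptions (one independent p-value per partition for the same conditional independence test); the function name and the closed-form chi-square tail are mine, not from the paper.

```python
import math

def fisher_combine(pvalues):
    """Combine independent p-values with Fisher's method.

    The statistic -2 * sum(log p_i) follows a chi-square distribution
    with 2k degrees of freedom under the global null. For even degrees
    of freedom the survival function has a closed form, so no SciPy
    dependency is needed.
    """
    k = len(pvalues)
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    half = stat / 2.0
    # Closed-form chi-square survival function for df = 2k:
    # P(X > stat) = exp(-stat/2) * sum_{i=0}^{k-1} (stat/2)^i / i!
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# A single p-value passes through unchanged; agreeing small p-values reinforce
# each other, yielding a combined p-value smaller than either one alone.
print(fisher_combine([0.5]))         # 0.5
print(fisher_combine([0.01, 0.02]))
```

Because each partition only ships one number (its p-value) to the combiner, communication cost stays independent of the partition's sample size, which is the point of relying on computations local to a partition.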
On Consistency of Graph-based Semi-supervised Learning
Graph-based semi-supervised learning is one of the most popular methods in
machine learning. Some of its theoretical properties such as bounds for the
generalization error and the convergence of the graph Laplacian regularizer
have been studied in the computer science and statistics literatures. However, a
fundamental statistical property, the consistency of the estimator from this
method, has not been proved. In this article, we study the consistency problem
under a non-parametric framework. We prove the consistency of graph-based
learning in the case that the estimated scores are enforced to be equal to the
observed responses for the labeled data. The sample sizes of both labeled and
unlabeled data are allowed to grow in this result. When the estimated scores
are not required to be equal to the observed responses, a tuning parameter is
used to balance the loss function and the graph Laplacian regularizer. We give
a counterexample demonstrating that the estimator for this case can be
inconsistent. The theoretical findings are supported by numerical studies.
Comment: This paper was accepted by the 2019 IEEE 39th International Conference
on Distributed Computing Systems (ICDCS).
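The hard-constraint case analyzed in the abstract (estimated scores forced to equal the observed responses on labeled data) corresponds to the classical harmonic solution of graph Laplacian regularization, which can be computed by iterated neighborhood averaging. The toy graph and function below are my own illustration, not material from the paper.

```python
def harmonic_labels(adj, labels, iters=500):
    """Graph-based SSL with hard label constraints.

    Minimizing the graph Laplacian regularizer sum over edges (u, v) of
    (f_u - f_v)^2, subject to f_v = y_v on labeled nodes, yields the
    harmonic solution: every unlabeled score equals the mean of its
    neighbors' scores. Gauss-Seidel-style sweeps converge to that fixed
    point on a connected graph.
    """
    f = {v: labels.get(v, 0.0) for v in adj}
    for _ in range(iters):
        for v in adj:
            if v not in labels:
                f[v] = sum(f[u] for u in adj[v]) / len(adj[v])
    return f

# Path graph 0-1-2-3 with labeled endpoints: scores interpolate linearly.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
scores = harmonic_labels(chain, {0: 0.0, 3: 1.0})
print(scores)  # f_1 converges to 1/3, f_2 to 2/3
```

The consistency question the paper studies is whether estimators of this kind recover the true regression function as both labeled and unlabeled sample sizes grow; the soft-constraint variant replaces the hard equality with a tuning parameter balancing loss and regularizer.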
The Loss Rank Criterion for Variable Selection in Linear Regression Analysis
Lasso and other regularization procedures are attractive methods for variable
selection, subject to a proper choice of shrinkage parameter. Given a set of
potential subsets produced by a regularization algorithm, a consistent model
selection criterion is proposed to select the best one among this preselected
set. The approach leads to a fast and efficient procedure for variable
selection, especially in high-dimensional settings. Model selection consistency
of the suggested criterion is proven when the number of covariates d is fixed.
Simulation studies suggest that the criterion still enjoys model selection
consistency when d is much larger than the sample size. The simulations also
show that our approach for variable selection works surprisingly well in
comparison with existing competitors. The method is also applied to a real data
set.
Comment: 18 pages, 1 figure
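The two-stage workflow described above, where a regularization path produces candidate subsets and a consistent criterion picks one, can be sketched with a BIC-style score. To be clear, the loss rank criterion itself is defined in the paper and differs from BIC; the code below only illustrates the selection step, with toy residual sums of squares.

```python
import math

def select_subset(candidates, rss, n):
    """Pick one candidate subset using a BIC-style score.

    `candidates` is a list of subsets (tuples of covariate indices) and
    `rss[s]` is the residual sum of squares from refitting OLS on subset
    s with n samples. Swapping in the paper's loss rank criterion would
    mean replacing `score` below; the surrounding workflow is the same.
    """
    def score(s):
        return n * math.log(rss[s] / n) + len(s) * math.log(n)
    return min(candidates, key=score)

# Toy path: adding covariate 2 barely reduces the RSS, so the complexity
# penalty rejects it and the single-covariate model wins.
path = [(1,), (1, 2), (1, 2, 3)]
rss = {(1,): 10.0, (1, 2): 9.9, (1, 2, 3): 9.89}
print(select_subset(path, rss, n=100))  # (1,)
```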
Adaptive Tag Selection for Image Annotation
Not all tags are relevant to an image, and the number of relevant tags is
image-dependent. Although many methods have been proposed for image
auto-annotation, the question of how to determine the number of tags to be
selected per image remains open. The main challenge is that for a large tag
vocabulary, there is often a lack of ground truth data for acquiring optimal
cutoff thresholds per tag. In contrast to previous works that pre-specify the
number of tags to be selected, we propose in this paper adaptive tag selection.
The key insight is to divide the vocabulary into two disjoint subsets, namely a
seen set consisting of tags having ground truth available for optimizing their
thresholds and a novel set consisting of tags without any ground truth. Such a
division allows us to estimate how many tags shall be selected from the novel
set according to the tags that have been selected from the seen set. The
effectiveness of the proposed method is justified by our participation in the
ImageCLEF 2014 image annotation task. On a set of 2,065 test images with ground
truth available for 207 tags, the benchmark evaluation shows that compared to
the popular top-k strategy, which obtains an F-score of 0.122, adaptive tag
selection achieves a higher F-score of 0.223. Moreover, by treating the
underlying image annotation system as a black box, the new method can be used
as an easy plug-in to boost the performance of existing systems.
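The seen/novel split can be sketched as follows. This is one simple instantiation of the idea, not necessarily the authors' estimator: seen tags are kept when they clear per-tag thresholds tuned on ground truth, and the count of seen tags kept is reused as the selection budget for the novel set. All names and the transfer rule are my own illustration.

```python
def adaptive_select(seen_scores, seen_thresholds, novel_scores):
    """Adaptive tag selection sketch for a single image.

    Seen-set tags have ground truth, so each carries an optimized score
    threshold. Novel-set tags have no ground truth, so the number of
    seen tags selected is borrowed as the budget for how many top-scoring
    novel tags to keep. This transfer rule is an illustrative
    simplification, not the estimator from the paper.
    """
    kept = [t for t, s in seen_scores.items() if s >= seen_thresholds[t]]
    budget = len(kept)
    ranked = sorted(novel_scores, key=novel_scores.get, reverse=True)
    return kept + ranked[:budget]

tags = adaptive_select(
    {"dog": 0.9, "cat": 0.2},        # seen-set scores for one image
    {"dog": 0.5, "cat": 0.5},        # thresholds tuned on ground truth
    {"corgi": 0.8, "sphynx": 0.1},   # novel-set scores, no ground truth
)
print(tags)  # ['dog', 'corgi']
```

The black-box property follows from the signature: the function consumes only per-tag scores, so any existing annotation system that emits scores can be wrapped without modification.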
Evaluation of recommender systems in streaming environments
Evaluation of recommender systems is typically done with finite datasets.
This means that conventional evaluation methodologies are only applicable in
offline experiments, where data and models are stationary. However, in
real-world systems, user feedback is continuously generated, at unpredictable rates.
Given this setting, one important issue is how to evaluate algorithms in such a
streaming data environment. In this paper we propose a prequential evaluation
protocol for recommender systems, suitable for streaming data environments, but
also applicable in stationary settings. Using this protocol we are able to
monitor the evolution of algorithms' accuracy over time. Furthermore, we are
able to perform reliable comparative assessments of algorithms by computing
significance tests over a sliding window. We argue that besides being suitable
for streaming data, prequential evaluation allows the detection of phenomena
that would otherwise remain unnoticed in the evaluation of both offline and
online recommender systems.
Comment: Workshop on 'Recommender Systems Evaluation: Dimensions and Design'
(REDD 2014), held in conjunction with RecSys 2014. October 10, 2014, Silicon
Valley, United States.
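The prequential (test-then-learn) loop described above can be sketched in a few lines. The toy model, the absolute-error metric, and the tiny event stream below are my own stand-ins for illustration; the protocol itself — score each event before learning from it, and monitor error over a sliding window — is what the paper proposes.

```python
from collections import deque

class RunningMean:
    """Toy incremental model: predicts the global mean rating seen so far."""
    def __init__(self):
        self.n, self.total = 0, 0.0
    def predict(self, user, item):
        return self.total / self.n if self.n else 0.0
    def update(self, user, item, value):
        self.n += 1
        self.total += value

def prequential(stream, model, window=100):
    """Prequential evaluation over a rating stream.

    Each incoming (user, item, value) event is first used to score the
    model's prediction, then fed to the model as training data. Errors
    are averaged over a sliding window, so accuracy can be monitored as
    it evolves over time; significance tests over the same window enable
    reliable comparisons between algorithms.
    """
    recent = deque(maxlen=window)
    curve = []
    for user, item, value in stream:
        recent.append(abs(model.predict(user, item) - value))  # test first
        model.update(user, item, value)                        # then learn
        curve.append(sum(recent) / len(recent))
    return curve

events = [(1, "a", 1.0), (2, "a", 1.0), (1, "b", 4.0)]
print(prequential(events, RunningMean(), window=2))  # [1.0, 0.5, 1.5]
```

Because every event is scored exactly once, before the model has seen it, the same loop works unchanged on a finite offline dataset replayed in order, which is why the protocol also applies in stationary settings.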
On-line support vector machines for function approximation
This paper describes an on-line method for building epsilon-insensitive support vector machines for regression, as described in (Vapnik, 1995). The method is an extension of the method developed by (Cauwenberghs & Poggio, 2000) for building incremental support vector machines for classification. Machines obtained with this approach are equivalent to those obtained by applying exact methods such as quadratic programming, but they are obtained more quickly and allow the incremental addition of new points, removal of existing points, and updates of target values for existing data. This development opens the application of SVM regression to areas such as on-line prediction of temporal series or generalization of value functions in reinforcement learning.
Effective classifiers for detecting objects
Several state-of-the-art machine learning classifiers are compared for the purposes of object detection in complex images, using global image features derived from the Ohta color space and Local Binary Patterns. Image complexity in this sense refers to the degree to which the target objects are occluded and/or non-dominant (i.e. not in the foreground) in the image, and also the degree to which the images are cluttered with non-target objects. The results indicate that a voting ensemble of Support Vector Machines, Random Forests, and Boosted Decision Trees provides the best performance, with AUC values of up to 0.92 and Equal Error Rate accuracies of up to 85.7% in stratified 10-fold cross-validation experiments on the GRAZ02 complex image dataset.
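The voting ensemble at the core of the best-performing configuration can be sketched with hard majority voting. The lambda stand-ins below are placeholders for the trained SVM, random forest, and boosted-tree models, not the actual GRAZ02 classifiers.

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Hard-voting ensemble: each classifier casts one vote per input.

    Ties are broken by whichever label Counter ranks first; real systems
    often weight votes by classifier confidence instead.
    """
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Stand-ins for trained SVM / random forest / boosted-tree models:
# each maps a feature vector to a predicted label.
svm_like = lambda x: "object" if x[0] > 0.5 else "background"
forest_like = lambda x: "object" if x[1] > 0.5 else "background"
boost_like = lambda x: "object"  # deliberately biased stand-in

print(majority_vote([svm_like, forest_like, boost_like], (0.9, 0.1)))  # 'object'
print(majority_vote([svm_like, forest_like, boost_like], (0.1, 0.2)))  # 'background'
```

Majority voting helps precisely when the member classifiers make uncorrelated errors, which is plausible for model families as different as SVMs, random forests, and boosted trees.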