The True Destination of EGO is Multi-local Optimization
Efficient global optimization is a popular algorithm for the optimization of
expensive multimodal black-box functions. One important reason for its
popularity is its theoretical foundation of global convergence. However, as the
budgets in expensive optimization are very small, the asymptotic properties
only play a minor role and the algorithm sometimes comes off badly in
experimental comparisons. Many alternative variants have therefore been
proposed over the years. In this work, we show experimentally that the
algorithm instead has its strength in a setting where multiple optima are to be
identified.
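The acquisition criterion that drives EGO is expected improvement under the Gaussian-process posterior. As a minimal stdlib-only sketch (the function name and interface here are illustrative, not taken from the paper):

```python
import math

def expected_improvement(mu, sigma, best_f):
    """Expected improvement for minimization, given a Gaussian
    posterior N(mu, sigma^2) at a candidate point and the best
    objective value observed so far, best_f.

    EI(x) = (best_f - mu) * Phi(z) + sigma * phi(z),  z = (best_f - mu) / sigma
    """
    if sigma <= 0.0:
        # Degenerate posterior: improvement is deterministic.
        return max(best_f - mu, 0.0)
    z = (best_f - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # phi(z)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # Phi(z)
    return (best_f - mu) * cdf + sigma * pdf
```

Because EI rewards both low predicted mean and high predictive uncertainty, repeatedly maximizing it tends to revisit several promising basins rather than drilling into one, which is consistent with the multi-local-optimization strength the abstract describes.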
Variable selection for model-based clustering using the integrated complete-data likelihood
Variable selection in cluster analysis is important yet challenging. It can
be achieved by regularization methods, which realize a trade-off between the
clustering accuracy and the number of selected variables by using a lasso-type
penalty. However, the calibration of the penalty term can suffer from
criticisms. Model selection methods are an efficient alternative, yet they
require a difficult optimization of an information criterion which involves
combinatorial problems. First, most of these optimization algorithms are based
on a suboptimal procedure (e.g. a stepwise method). Second, the algorithms are
computationally demanding because they require multiple runs of the EM
algorithm. Here we propose
to use a new information criterion based on the integrated complete-data
likelihood. It does not require any estimate and its maximization is simple and
computationally efficient. The original contribution of our approach is to
perform the model selection without requiring any parameter estimation. Then,
parameter inference is needed only for the unique selected model. This approach
is used for the variable selection of a Gaussian mixture model with conditional
independence assumption. The numerical experiments on simulated and benchmark
datasets show that the proposed method often outperforms two classical
approaches for variable selection.Comment: submitted to Statistics and Computin
Iterative Residual Rescaling: An Analysis and Generalization of LSI
We consider the problem of creating document representations in which
inter-document similarity measurements correspond to semantic similarity. We
first present a novel subspace-based framework for formalizing this task. Using
this framework, we derive a new analysis of Latent Semantic Indexing (LSI),
showing a precise relationship between its performance and the uniformity of
the underlying distribution of documents over topics. This analysis helps
explain the improvements gained by Ando's (2000) Iterative Residual Rescaling
(IRR) algorithm: IRR can compensate for distributional non-uniformity. A
further benefit of our framework is that it provides a well-motivated,
effective method for automatically determining the rescaling factor IRR depends
on, leading to further improvements. A series of experiments over various
settings and with several evaluation metrics validates our claims.
Comment: To appear in the proceedings of SIGIR 2001. 11 pages
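The baseline construction that IRR generalizes is standard LSI: embed documents via a rank-k truncated SVD of the term-document matrix and compare them by cosine similarity in the latent space. A minimal sketch (IRR itself would additionally rescale the residual matrix between successive components, which is omitted here):

```python
import numpy as np

def lsi_embed(term_doc, k):
    """Standard LSI: project documents (columns of the term-document
    matrix) into a k-dimensional latent space via truncated SVD.
    Returns one embedding row per document."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

In this representation, documents that share term usage land close together even without exact word overlap, which is the "inter-document similarity corresponds to semantic similarity" goal the abstract formalizes.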