Resampling methods for parameter-free and robust feature selection with mutual information
Combining the mutual information criterion with a forward feature selection
strategy offers a good trade-off between optimality of the selected feature
subset and computation time. However, it requires setting the parameter(s) of
the mutual information estimator and determining when to halt the forward
procedure. These two choices are difficult to make because, as the
dimensionality of the subset increases, the estimation of the mutual
information becomes less and less reliable. This paper proposes to use
resampling methods, K-fold cross-validation and the permutation test, to
address both issues. The resampling methods carry information about the
variance of the estimator, which can then be used to automatically set the
parameter and to compute a threshold at which to stop the forward procedure.
The procedure is illustrated on a synthetic dataset as well as on real-world
examples.
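As a rough illustration of the idea (not the authors' code), the sketch below runs a greedy forward selection and stops via a permutation test: shuffling the target builds a null distribution of the MI score, and the procedure halts once the best candidate no longer beats the null. scikit-learn's mutual_info_regression, summed over the subset, is a crude stand-in for the joint kNN estimator discussed in the paper; all names and defaults are hypothetical.

```python
# Hypothetical sketch: greedy forward selection with a permutation-test stop.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mi_score(X_subset, y, k=3):
    # Stand-in MI score for a candidate subset (the paper estimates the
    # joint MI of the whole subset with y using a kNN estimator).
    return mutual_info_regression(X_subset, y, n_neighbors=k).sum()

def forward_select(X, y, k=3, n_perm=100, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        # Score each candidate feature appended to the current subset.
        gains = [mi_score(X[:, selected + [j]], y, k) for j in remaining]
        j_best = remaining[int(np.argmax(gains))]
        # Permutation test: shuffling y gives a null distribution of the
        # score; stop once the best candidate no longer beats the null.
        null = [mi_score(X[:, selected + [j_best]], rng.permutation(y), k)
                for _ in range(n_perm)]
        if max(gains) <= np.quantile(null, 1.0 - alpha):
            break
        selected.append(j_best)
        remaining.remove(j_best)
    return selected
```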
Evolvability signatures of generative encodings: beyond standard performance benchmarks
Evolutionary robotics is a promising approach to autonomously synthesize
machines with abilities that resemble those of animals, but the field suffers
from a lack of strong foundations. In particular, evolutionary systems are
currently assessed solely by the fitness score their evolved artifacts can
achieve for a specific task, whereas such fitness-based comparisons provide
limited insight into how the same system would perform on different tasks, and
into its capacity to adapt to changes in fitness (e.g., after damage to the
machine, or in new situations). To counter these limitations, we
introduce the concept of "evolvability signatures", which picture the
post-mutation statistical distribution of both behavior diversity (how
different are the robot behaviors after a mutation?) and fitness values (how
different is the fitness after a mutation?). We tested the relevance of this
concept by evolving controllers for hexapod robot locomotion using five
different genotype-to-phenotype mappings (direct encoding, generative encoding
of open-loop and closed-loop central pattern generators, generative encoding of
neural networks, and single-unit pattern generators (SUPG)). We observed a
predictive relationship between the evolvability signature of each encoding and
the number of generations required by hexapods to adapt after incurring damage.
Our study also reveals that, across the five investigated encodings, the SUPG
scheme achieved the best evolvability signature, and was consistently the
fastest to recover an effective gait following robot damage. Overall, our evolvability
signatures neatly complement existing task-performance benchmarks, and pave the
way for stronger foundations for research in evolutionary robotics.
Comment: 24 pages with 12 figures in the main text, and 4 supplementary figures. Accepted at Information Sciences journal (in press). Supplemental videos are available online at http://goo.gl/uyY1R
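A minimal sketch of how such a signature might be estimated is below; `mutate`, `behavior_descriptor`, and `fitness` are placeholder callables for the encoding-specific mutation operator and the task evaluation, and do not come from the paper's implementation.

```python
# Hypothetical sketch: sampling an "evolvability signature" for one encoding.
import numpy as np

def evolvability_signature(genomes, mutate, behavior_descriptor, fitness,
                           n_mutants=200, seed=0):
    rng = np.random.default_rng(seed)
    diversity, delta_fitness = [], []
    for g in genomes:
        b0, f0 = behavior_descriptor(g), fitness(g)
        for _ in range(n_mutants):
            m = mutate(g, rng)
            # Behavior diversity: how different the mutant's behavior is.
            diversity.append(np.linalg.norm(behavior_descriptor(m) - b0))
            # Fitness impact: how different the fitness is after mutation.
            delta_fitness.append(fitness(m) - f0)
    # The signature is the joint post-mutation distribution of these two
    # quantities; returning the samples lets it be plotted as a 2-D density.
    return np.array(diversity), np.array(delta_fitness)
```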
Poor Philanthropist II: New approaches to sustainable development
The second title in the Poor Philanthropist series, this monograph represents the culmination of a six-year journey. The first three years of that journey were characterised by in-depth qualitative research, which produced an understanding of philanthropic traditions among people who are poor in southern Africa and gave rise to the new concepts at the centre of the research monograph The Poor Philanthropist: How and Why the Poor Help Each Other, published by the Southern Africa-United States Centre for Leadership and Public Values in 2005.
Extensions of stability selection using subsamples of observations and covariates
We introduce extensions of stability selection, a method introduced by
Meinshausen and Bühlmann (J R Stat Soc 72:417-473, 2010) to stabilise variable
selection methods. We propose to apply a base selection method repeatedly
to random observation subsamples and covariate subsets under scrutiny, and to
select covariates based on their selection frequency. We analyse the effects
and benefits of these extensions. Our analysis generalizes the theoretical
results of Meinshausen and Bühlmann (J R Stat Soc 72:417-473, 2010) from the
case of half-samples to subsamples of arbitrary size. We study, in a
theoretical manner, the effect of taking random covariate subsets using a
simplified score model. Finally, we validate these extensions with numerical
experiments on both synthetic and real datasets, and compare the obtained
results in detail to the original stability selection method.
Comment: accepted for publication in Statistics and Computing
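A hedged sketch of the proposed extension follows, with a lasso as one plausible base selector (the method is selector-agnostic); function and parameter names are illustrative, not from the paper.

```python
# Sketch: repeat a base selector on random observation subsamples and random
# covariate subsets, then rank covariates by selection frequency.
import numpy as np
from sklearn.linear_model import Lasso

def extended_stability_selection(X, y, n_iter=100, row_frac=0.5, col_frac=0.5,
                                 alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)   # times each covariate was selected
    chances = np.zeros(p)  # times each covariate appeared in a drawn subset
    for _ in range(n_iter):
        rows = rng.choice(n, size=int(row_frac * n), replace=False)
        cols = rng.choice(p, size=int(col_frac * p), replace=False)
        chances[cols] += 1
        coef = Lasso(alpha=alpha).fit(X[np.ix_(rows, cols)], y[rows]).coef_
        counts[cols[coef != 0]] += 1
    # Selection frequency, conditioned on the covariate being in the subset.
    return counts / np.maximum(chances, 1)
```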
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell^2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
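To make the forward-backward scheme concrete, here is a minimal sketch of the iteration for the $\ell^1$ (lasso) instance of the reviewed class of problems; it illustrates the scheme and is not code from the chapter.

```python
# Forward-backward (proximal gradient) iteration for
# min_x 0.5*||y - A x||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1 (the "backward" step).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    x = np.zeros(A.shape[1])
    # Step size 1/L, with L = ||A||^2 the Lipschitz constant of the gradient.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                         # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x
```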