The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction
Stimulus dimensionality-reduction methods in neuroscience seek to identify a
low-dimensional space of stimulus features that affect a neuron's probability
of spiking. One popular method, known as maximally informative dimensions
(MID), uses an information-theoretic quantity known as "single-spike
information" to identify this space. Here we examine MID from a model-based
perspective. We show that MID is a maximum-likelihood estimator for the
parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical
single-spike information corresponds to the normalized log-likelihood under a
Poisson model. This equivalence implies that MID does not necessarily find
maximally informative stimulus dimensions when spiking is not well described as
Poisson. We provide several examples to illustrate this shortcoming, and derive
a lower bound on the information lost when spiking is Bernoulli in discrete
time bins. To overcome this limitation, we introduce model-based dimensionality
reduction methods for neurons with non-Poisson firing statistics, and show that
they can be framed equivalently in likelihood-based or information-theoretic
terms. Finally, we show how to overcome practical limitations on the number of
stimulus dimensions that MID can estimate by constraining the form of the
non-parametric nonlinearity in an LNP model. We illustrate these methods with
simulations and data from primate visual cortex.
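The claimed equivalence can be checked numerically. The sketch below is illustrative, not the authors' code: the 1-D filtered stimulus, the exponential nonlinearity, and the histogram binning are all my assumptions. It shows that the empirical single-spike information computed from a plug-in (histogram) nonlinearity equals the per-spike Poisson log-likelihood gain over a homogeneous-rate model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D filtered stimulus and LNP spike counts (illustrative choices)
n = 50_000
x = rng.normal(size=n)                        # filter output per time bin
rate = 0.05 * np.exp(1.2 * x - 0.72)          # exponential nonlinearity, mean rate ~0.05/bin
spikes = rng.poisson(rate)

# Histogram ("plug-in") estimate of the nonlinearity
edges = np.linspace(-4, 4, 41)
idx = np.digitize(x, edges)                   # bin index 0..41
n_bin = np.bincount(idx, minlength=42)
n_spk = np.bincount(idx, weights=spikes, minlength=42)
lam = np.divide(n_spk, n_bin, out=np.zeros(42), where=n_bin > 0)

n_sp = spikes.sum()
mean_rate = n_sp / n

# Empirical single-spike information (bits/spike):
#   sum_b P(b | spike) * log2[ P(b | spike) / P(b) ]
p_b = n_bin / n
p_b_spk = n_spk / n_sp
m = p_b_spk > 0
I_ss = np.sum(p_b_spk[m] * np.log2(p_b_spk[m] / p_b[m]))

# Per-spike Poisson log-likelihood gain of the plug-in LNP over a
# homogeneous model; the (-lam) terms cancel because the plug-in rate
# reproduces the observed spike count in every bin.
nz = spikes > 0
ll_gain = np.sum(spikes[nz] * np.log2(lam[idx][nz] / mean_rate)) / n_sp

print(I_ss, ll_gain)   # identical up to floating-point error
```

The two quantities agree exactly (not just approximately) because the plug-in nonlinearity matches the empirical spike counts bin by bin, which is the histogram-based version of the equivalence the abstract states.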
Unsupervised Learning via Total Correlation Explanation
Learning by children and animals occurs effortlessly and largely without
obvious supervision. Successes in automating supervised learning have not
translated to the more ambiguous realm of unsupervised learning where goals and
labels are not provided. Barlow (1961) suggested that the signal that brains
leverage for unsupervised learning is dependence, or redundancy, in the sensory
environment. Dependence can be characterized using the information-theoretic
multivariate mutual information measure called total correlation. The principle
of Total Correlation Explanation (CorEx) is to learn representations of data
that "explain" as much dependence in the data as possible. We review some
manifestations of this principle along with successes in unsupervised learning
problems across diverse domains including human behavior, biology, and
language.

Comment: Invited contribution for IJCAI 2017 Early Career Spotlight. 5 pages, 1 figure.
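For small discrete systems, total correlation can be estimated directly from its definition, TC(X) = sum_i H(X_i) - H(X_1, ..., X_n). A minimal sketch (the toy generative model and plug-in entropy estimates are my assumptions, not taken from the paper):

```python
import numpy as np
from collections import Counter

def entropy_bits(rows):
    """Plug-in Shannon entropy (bits) of discrete samples, one sample per row."""
    p = np.array(list(Counter(map(tuple, rows)).values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def total_correlation(data):
    """TC(X) = sum_i H(X_i) - H(X_1, ..., X_n): total dependence, in bits."""
    marginals = sum(entropy_bits(data[:, [i]]) for i in range(data.shape[1]))
    return marginals - entropy_bits(data)

rng = np.random.default_rng(1)
n = 100_000
z = rng.integers(0, 2, size=n)                          # hidden common cause
noisy = lambda: np.where(rng.random(n) < 0.95, z, 1 - z)  # noisy copies of z
dependent = np.column_stack([noisy(), noisy(), noisy()])
independent = rng.integers(0, 2, size=(n, 3))

print(total_correlation(dependent))    # large: the shared cause induces redundancy
print(total_correlation(independent))  # near 0: no dependence to "explain"
```

A latent factor that "explains" the dependence (here, z) is exactly what drives TC above zero, which is the signal CorEx optimizes representations to capture.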
Visual Representations: Defining Properties and Deep Approximations
Visual representations are defined in terms of minimal sufficient statistics
of visual data, for a class of tasks, that are also invariant to nuisance
variability. Minimal sufficiency guarantees that we can store a representation
in lieu of raw data with smallest complexity and no performance loss on the
task at hand. Invariance guarantees that the statistic is constant with respect
to uninformative transformations of the data. We derive analytical expressions
for such representations and show they are related to feature descriptors
commonly used in computer vision, as well as to convolutional neural networks.
This link highlights the assumptions and approximations tacitly assumed by
these methods and explains empirical practices such as clamping, pooling and
joint normalization.

Comment: UCLA CSD TR140023, Nov. 12, 2014, revised April 13, 2015, November 13, 2015, February 28, 201
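In symbols, the two defining properties read roughly as follows. This is a sketch in standard information-theoretic notation, my paraphrase rather than a formula quoted from the report: a representation $z = \phi(x)$ of data $x$, for a task $y$, subject to a nuisance group $G$, should satisfy

```latex
\underbrace{I\big(y;\, \phi(x)\big) = I(y;\, x)}_{\text{sufficiency}}
\qquad
\underbrace{\phi(g \cdot x) = \phi(x) \;\; \forall g \in G}_{\text{invariance to nuisances}}
```

with minimality meaning $\phi$ has the smallest complexity (e.g. smallest $H(\phi(x))$) among statistics satisfying both conditions, so that storing $z$ in place of $x$ incurs no loss on the task.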
Transfer Learning from Deep Features for Remote Sensing and Poverty Mapping
The lack of reliable data in developing countries is a major obstacle to
sustainable development, food security, and disaster relief. Poverty data, for
example, is typically scarce, sparse in coverage, and labor-intensive to
obtain. Remote sensing data such as high-resolution satellite imagery, on the
other hand, is becoming increasingly available and inexpensive. Unfortunately,
such data is highly unstructured and currently no techniques exist to
automatically extract useful insights to inform policy decisions and help
direct humanitarian efforts. We propose a novel machine learning approach to
extract large-scale socioeconomic indicators from high-resolution satellite
imagery. The main challenge is that training data is very scarce, making it
difficult to apply modern techniques such as Convolutional Neural Networks
(CNN). We therefore propose a transfer learning approach where nighttime light
intensities are used as a data-rich proxy. We train a fully convolutional CNN
model to predict nighttime lights from daytime imagery, simultaneously learning
features that are useful for poverty prediction. The model learns filters
identifying different terrains and man-made structures, including roads,
buildings, and farmlands, without any supervision beyond nighttime lights. We
demonstrate that these learned features are highly informative for poverty
mapping, even approaching the predictive performance of survey data collected
in the field.

Comment: In Proc. 30th AAAI Conference on Artificial Intelligence.
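The two-stage recipe — learn features on a data-rich proxy task, then reuse them for the label-scarce target — can be sketched as follows. Everything here is a toy stand-in: random vectors play the role of imagery, a tiny fully connected network replaces the CNN, and the synthetic "nightlights" and "poverty" labels share latent structure by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy world: proxy labels (nighttime lights) and target labels (poverty)
# both depend on the same latent structure in the "imagery".
d, h = 20, 16
W_true = rng.normal(size=(d, 3))
latent = lambda X: np.tanh(X @ W_true)

X_proxy = rng.normal(size=(5000, d))            # abundant daytime imagery
y_proxy = latent(X_proxy) @ np.array([1.0, -2.0, 0.5])   # "nightlights"
X_tgt = rng.normal(size=(30, d))                # scarce surveyed locations
y_tgt = latent(X_tgt) @ np.array([0.8, -1.5, 1.0])       # "poverty"

# Stage 1: train a small network on the data-rich proxy task.
W1 = 0.3 * rng.normal(size=(d, h))
w2 = np.zeros(h)
mse_before = np.mean((np.tanh(X_proxy @ W1) @ w2 - y_proxy) ** 2)
for _ in range(500):                            # full-batch gradient descent
    H = np.tanh(X_proxy @ W1)
    err = H @ w2 - y_proxy
    w2 -= 0.05 * H.T @ err / len(err)
    W1 -= 0.05 * X_proxy.T @ (np.outer(err, w2) * (1 - H**2)) / len(err)
mse_after = np.mean((np.tanh(X_proxy @ W1) @ w2 - y_proxy) ** 2)

# Stage 2: freeze the learned features; fit ridge regression on the few
# target labels (analogous to reusing CNN features for poverty mapping).
feats = lambda X: np.tanh(X @ W1)
A = feats(X_tgt)
beta = np.linalg.solve(A.T @ A + 1e-2 * np.eye(h), A.T @ y_tgt)
poverty_pred = feats(rng.normal(size=(1000, d))) @ beta
```

The design choice mirrors the abstract: the proxy task supplies the gradient signal that shapes the features, so the target model only has to fit a small linear head, which 30 labeled examples can support.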