Classification and Geometry of General Perceptual Manifolds
Perceptual manifolds arise when a neural population responds to an ensemble
of sensory signals associated with different physical features (e.g.,
orientation, pose, scale, location, and intensity) of the same perceptual
object. Object recognition and discrimination require classifying the
manifolds in a manner that is insensitive to variability within a manifold. How
neuronal systems give rise to invariant object classification and recognition
is a fundamental problem in brain theory as well as in machine learning. Here
we study the ability of a readout network to classify objects from their
perceptual manifold representations. We develop a statistical mechanical theory
for the linear classification of manifolds with arbitrary geometry, revealing a
remarkable relation to the mathematics of conic decomposition. We introduce
novel geometrical measures of manifold radius and manifold dimension that
explain the classification capacity for manifolds of various
geometries. The general theory is demonstrated on a number of representative
manifolds, including L2 ellipsoids, prototypical of strictly convex manifolds;
L1 balls, representing polytopes consisting of finitely many sample points; and
orientation manifolds, which arise from neurons tuned to respond to a continuous
angle variable, such as object orientation. The effects of label sparsity on
the classification capacity of manifolds are elucidated, revealing a scaling
relation between label sparsity and manifold radius. Theoretical predictions
are corroborated by numerical simulations using recently developed algorithms
to compute maximum margin solutions for manifold dichotomies. Our theory and
its extensions provide a powerful and rich framework for applying statistical
mechanics of linear classification to data arising from neuronal responses to
object stimuli, as well as to artificial deep networks trained for object
recognition tasks.
Comment: 24 pages, 12 figures, Supplementary Material
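The kind of numerical experiment the abstract alludes to, finding linear separators for manifold dichotomies, can be illustrated with a minimal sketch. This is not the authors' algorithm: all parameters (dimensions, radii, sample counts) are invented for illustration, and the max-margin solver of the paper is replaced here by the classical perceptron rule, which converges exactly when the sampled dichotomy is linearly separable.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50   # ambient dimension (number of neurons)
P = 10   # number of manifolds
M = 20   # samples drawn per manifold
R = 0.2  # overall manifold radius (illustrative)
D = 5    # intrinsic manifold dimension

# Each manifold: a random center plus an ellipsoidal cloud spanning
# a random D-dimensional subspace; every manifold gets one binary label.
centers = rng.standard_normal((P, N))
labels = rng.choice([-1.0, 1.0], size=P)

points, point_labels = [], []
for c, lab in zip(centers, labels):
    axes = rng.standard_normal((D, N))
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    radii = rng.uniform(0.5, 1.5, size=D) * R
    s = rng.standard_normal((M, D))
    s /= np.linalg.norm(s, axis=1, keepdims=True)  # points on the ellipsoid surface
    points.append(c + (s * radii) @ axes)
    point_labels.append(np.full(M, lab))

X = np.concatenate(points)        # (P*M, N) sampled points
y = np.concatenate(point_labels)  # label of each point = label of its manifold

# Perceptron rule: repeatedly correct the first misclassified sample.
w = np.zeros(N)
for _ in range(5000):
    bad = np.flatnonzero(y * (X @ w) <= 0)
    if bad.size == 0:
        break
    w += y[bad[0]] * X[bad[0]]

separable = bool(np.all(y * (X @ w) > 0))
print("manifold dichotomy linearly separable:", separable)
```

With P/N well below capacity and a small manifold radius, the sampled dichotomy is comfortably separable, so the loop terminates quickly; pushing P/N or R up is a simple way to watch separability break down.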
Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks
In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call “transformational abstraction”. Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to “nuisance variation” in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply implementing 80s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing ripe with philosophical and psychological significance, because it is significantly better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain.
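The combination of linear and non-linear processing that the abstract credits with producing tolerance to nuisance variation can be made concrete in a toy sketch. This is purely illustrative, not the paper's own formalism: a single 1-D convolution (linear), ReLU (non-linear), and max-pooling stage, where the kernel and signal are invented, shows two inputs that differ by a small shift mapping to identical pooled features.

```python
import numpy as np

def conv_relu_maxpool(x, kernel, pool=4):
    # Linear stage: valid 1-D cross-correlation with the kernel.
    n = len(x) - len(kernel) + 1
    conv = np.array([x[i:i + len(kernel)] @ kernel for i in range(n)])
    act = np.maximum(conv, 0.0)  # non-linear stage: ReLU
    # Pooling stage: max over non-overlapping windows of width `pool`.
    m = len(act) // pool
    return act[:m * pool].reshape(m, pool).max(axis=1)

# An invented "edge detector" kernel and a signal containing one bump.
kernel = np.array([-1.0, 2.0, -1.0])
signal = np.zeros(32)
signal[10:13] = [1.0, 2.0, 1.0]

# Shift the bump by one sample: a nuisance variation of the same "object".
shifted = np.roll(signal, 1)

f0 = conv_relu_maxpool(signal, kernel)
f1 = conv_relu_maxpool(shifted, kernel)

print(np.allclose(signal, shifted))  # False: raw inputs differ
print(np.allclose(f0, f1))           # True: pooled representations coincide
```

Iterating such stages, each widening the pooling region, is the shape of the "increasingly tolerant" hierarchy the abstract describes; larger shifts eventually cross a pooling boundary, which is why real DCNNs stack many such layers.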
The Structure and Interpretation of Cosmology: Part I - General Relativistic Cosmology
The purpose of this work is to review, clarify, and critically analyse modern
mathematical cosmology. The emphasis is upon mathematical objects and
structures, rather than numerical computations. This paper concentrates on
general relativistic cosmology. The opening section reviews and clarifies the
Friedmann-Robertson-Walker models of general relativistic cosmology, while
Section 2 deals with the spatially homogeneous models. Particular attention is
paid in these opening sections to the topological and geometrical aspects of
cosmological models. Section 3 explains how the mathematical formalism can be
linked with astronomical observation. In particular, the informal,
observational notion of the celestial sphere is given a rigorous mathematical
implementation. Part II of this work will concentrate on inflationary cosmology
and quantum cosmology.
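For reference, the Friedmann-Robertson-Walker models that the opening section reviews are built on the line element (written here in units with $c = 1$, where $a(t)$ is the scale factor and $k \in \{-1, 0, +1\}$ the sign of the spatial curvature):

```latex
ds^2 = -dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - k r^2}
       + r^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \right]
```

The three values of $k$ correspond to the three constant-curvature spatial geometries (hyperbolic, flat, spherical) whose topological possibilities the paper examines.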
All the shapes of spaces: a census of small 3-manifolds
In this work we present a complete (no misses, no duplicates) census of
closed, connected, orientable, and prime 3-manifolds induced by plane graphs
with a bipartition of their edge sets (blinks) up to edges. Blinks form a
universal encoding for such manifolds: in fact, each such manifold is a
subtle class of blinks, \cite{lins2013B}. Blinks are in 1-1 correspondence with
{\em blackboard framed links}, \cite{kauffman1991knots, kauffman1994tlr}. We
hope that this census becomes as useful for the study of concrete examples of
3-manifolds as the tables of knots are in the study of knots and links.
Comment: 31 pages, 17 figures, 38 references. In this version we introduce
some new material concerning composite manifolds.