The Complementary Brain: From Brain Dynamics To Conscious Experiences
How do our brains so effectively achieve adaptive behavior in a changing world? Evidence is reviewed that brains are organized into parallel processing streams with complementary properties. Hierarchical interactions within each stream and parallel interactions between streams create coherent behavioral representations that overcome the complementary deficiencies of each stream and support unitary conscious experiences. This perspective suggests how brain design reflects the organization of the physical world with which brains interact, and offers an alternative to the computer metaphor, according to which brains are organized into independent modules. Examples from perception, learning, cognition, and action are described, and theoretical concepts and mechanisms by which complementarity is accomplished are summarized.
Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (ITI-97-20333); Office of Naval Research (N00014-95-1-0657)
The Complementary Brain: A Unifying View of Brain Specialization and Modularity
Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (ITI-97-20333); Office of Naval Research (N00014-95-1-0657)
How Does Our Visual System Achieve Shift and Size Invariance?
The question of shift and size invariance in the primate visual system is discussed. After a short review of the relevant neurobiology and psychophysics, a more detailed analysis of computational models is given. The two main types of networks considered are the dynamic routing circuit model and invariant feature networks, such as the neocognitron. Some specific open questions in the context of these models are raised and possible solutions are discussed.
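The pooling idea behind invariant feature networks such as the neocognitron can be illustrated with a minimal sketch: a toy "simple cell" layer correlates a template at every position, and a toy "complex cell" layer pools the maximum over positions, so the response no longer depends on where the feature appears. All names and values below are illustrative, not from the paper under discussion.

```python
import numpy as np

def feature_response(image, template):
    """Correlate a small template at every valid position (toy 'simple cell' layer)."""
    h, w = template.shape
    H, W = image.shape
    responses = np.empty((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            responses[i, j] = np.sum(image[i:i+h, j:j+w] * template)
    return responses

def invariant_response(image, template):
    """Global max pooling over positions (toy 'complex cell' layer):
    the pooled response is shift-invariant."""
    return feature_response(image, template).max()

# The same 2x2 'corner' feature placed at two different locations
template = np.array([[1.0, 1.0], [1.0, 0.0]])
img_a = np.zeros((6, 6)); img_a[0:2, 0:2] = template
img_b = np.zeros((6, 6)); img_b[3:5, 2:4] = template

# Pooled responses are identical despite the shift
print(invariant_response(img_a, template), invariant_response(img_b, template))
```

Size invariance would require pooling over scale as well, e.g. applying the template to a small image pyramid; this sketch covers only the shift case.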
Connecting Look and Feel: Associating the visual and tactile properties of physical materials
For machines to interact with the physical world, they must understand the physical properties of the objects and materials they encounter. We use fabrics as an example of a deformable material with a rich set of mechanical properties. A thin flexible fabric, when draped, tends to look different from a heavy stiff fabric. It also feels different when touched. Using a collection of 118 fabric samples, we captured color and depth images of draped fabrics along with tactile data from a high-resolution touch sensor. We then sought to associate the information from vision and touch by jointly training CNNs across the three modalities. Through the CNN, each input, regardless of the modality, generates an embedding vector that records the fabric's physical properties. By comparing the embeddings, our system is able to look at a fabric image and predict how it will feel, and vice versa. We also show that a system jointly trained on vision and touch data can outperform a similar system trained only on visual data when tested purely with visual inputs.
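The cross-modal retrieval step described above can be sketched without the CNNs: once each modality maps into a shared embedding space, "predicting how a fabric image will feel" reduces to a nearest-neighbor lookup among tactile embeddings. The synthetic embeddings below are a stand-in for the trained networks' outputs; the noise level and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for trained CNN embeddings: each fabric has one shared "physical
# property" vector; each modality observes it with small noise.
n_fabrics, dim = 5, 8
true_props = rng.normal(size=(n_fabrics, dim))
visual_emb = true_props + 0.05 * rng.normal(size=(n_fabrics, dim))
tactile_emb = true_props + 0.05 * rng.normal(size=(n_fabrics, dim))

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def retrieve(query, gallery):
    """Cross-modal retrieval: index of the nearest gallery embedding by cosine similarity."""
    sims = normalize(gallery) @ normalize(query)
    return int(np.argmax(sims))

# Given the *image* embedding of fabric 3, predict how it feels by finding
# the closest *tactile* embedding.
predicted = retrieve(visual_emb[3], tactile_emb)
print(predicted)
```

With embeddings this well aligned the lookup recovers the matching fabric; the hard part in the paper is the joint training that produces such alignment across modalities.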
How Does the Cerebral Cortex Work? Development, Learning, Attention, and 3D Vision by Laminar Circuits of Visual Cortex
A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sub-laminae. Here it is proposed how these layered circuits help to realize the processes of development, learning, perceptual grouping, attention, and 3D vision through a combination of bottom-up, horizontal, and top-down interactions. A key theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work.
Air Force Office of Scientific Research (F49620-01-1-0397); Office of Naval Research (N00014-01-1-0624)
View-Invariant Object Category Learning, Recognition, and Search: How Spatial and Object Attention Are Coordinated Using Surface-Based Attentional Shrouds
Air Force Office of Scientific Research (F49620-01-1-0397); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
Unsupervised learning of human motion
An unsupervised learning algorithm that can obtain a probabilistic model of an object composed of a collection of parts (a moving human body in our examples) automatically from unlabeled training data is presented. The training data include both useful "foreground" features and features that arise from irrelevant background clutter; the correspondence between parts and detected features is unknown. The joint probability density function of the parts is represented by a mixture of decomposable triangulated graphs, which allow for fast detection. To learn the model structure as well as the model parameters, an EM-like algorithm is developed in which the labeling of the data (the part assignments) is treated as hidden variables. The unsupervised learning technique is not limited to decomposable triangulated graphs. The efficiency and effectiveness of our algorithm are demonstrated by applying it to generate models of human motion automatically from unlabeled image sequences, and by testing the learned models on a variety of sequences.
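The core EM idea above, treating unknown assignments as hidden variables and alternating between soft assignment (E-step) and parameter re-estimation (M-step), can be shown on a much simpler model. The sketch below fits a two-component 1D Gaussian mixture to unlabeled data; the component labels play the role of the hidden part assignments. Fixed unit variances and equal mixing weights are simplifying assumptions for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic unlabeled data: two clusters, labels hidden.
data = np.concatenate([rng.normal(-4, 1, 200), rng.normal(4, 1, 200)])

# EM for a two-component 1D Gaussian mixture (unit variances, equal weights)
mu = np.array([-1.0, 1.0])  # crude initialization
for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    ll = -0.5 * (data[:, None] - mu[None, :]) ** 2
    resp = np.exp(ll - ll.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate means from the soft assignments
    mu = (resp * data[:, None]).sum(axis=0) / resp.sum(axis=0)

print(np.sort(mu))  # converges near the true means -4 and 4
```

The paper's setting is far harder: the hidden variables are correspondences between detected features and body parts, and the model structure (a decomposable triangulated graph) is learned jointly with the parameters.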
A role for recurrent processing in object completion: neurophysiological, psychophysical and computational evidence
Recognition of objects from partial information presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. We combined neurophysiological recordings in human cortex with psychophysical measurements and computational modeling to investigate the mechanisms involved in object completion. We recorded intracranial field potentials from 1,699 electrodes in 18 epilepsy patients to measure the timing and selectivity of responses along human visual cortex to whole and partial objects. Responses along the ventral visual stream remained selective even when only 9-25% of the object was shown. However, these visually selective signals emerged ~100 ms later for partial versus whole objects. The processing delays were particularly pronounced in higher visual areas within the ventral stream, suggesting the involvement of additional recurrent processing. In separate psychophysics experiments, disrupting this recurrent computation with a backward mask at ~75 ms significantly impaired recognition of partial, but not whole, objects. Additionally, computational modeling shows that the performance of a purely bottom-up architecture is impaired by heavy occlusion and that this effect can be partially rescued via the incorporation of top-down connections. These results provide spatiotemporal constraints on theories of object recognition that involve recurrent processing to recognize objects from partial information.
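A classic toy analogue of completion via recurrence is a Hopfield-style associative memory: a single feedforward read-out of an occluded pattern leaves the missing entries unknown, but iterating the recurrent dynamics fills them in from stored structure. This is only an illustration of the general principle, not the model used in the paper above.

```python
import numpy as np

# Store two orthogonal binary patterns in a Hopfield-style weight matrix.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
n = patterns.shape[1]
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)  # no self-connections

def complete(partial, steps=10):
    """Recurrent pattern completion: repeatedly apply the weights and threshold."""
    state = partial.astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# "Occlude" half of the first pattern (unknown entries set to 0)
occluded = patterns[0].astype(float).copy()
occluded[4:] = 0.0

print(complete(occluded))  # the recurrent dynamics restore the full pattern
```

Cutting the iteration short, loosely analogous to the backward mask at ~75 ms in the abstract, is exactly what prevents this kind of network from settling on the completed pattern.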