    A What-and-Where Neural Network for Invariant Image Preprocessing

    A feedforward neural network for invariant image preprocessing is proposed that represents the position, orientation, and size of an image figure (where it is) in a multiplexed spatial map. This map is used to generate an invariant representation of the figure that is insensitive to position, orientation, and size for purposes of pattern recognition (what it is). A multiscale array of oriented filters followed by competition between orientations and scales is used to define the Where filter.
    British Petroleum (89-A-1024); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI 90-00530); Office of Naval Research (N00014-91-J-4100); Air Force Office of Scientific Research (90-0175); NSF Graduate Fellowship
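
    The Where filter described above reduces, in its simplest reading, to an oriented multiscale filter bank followed by competition across orientations and scales. The following Python fragment is a minimal sketch of that idea, not the paper's implementation; the Gabor kernel form, kernel sizes, and winner-take-all competition are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size, sigma, theta, wavelength):
    """One oriented kernel in the multiscale filter array (illustrative form)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def where_map(image, n_orient=8, scales=(2, 4, 8)):
    """Multiplexed spatial map: filter at every (scale, orientation), then
    winner-take-all competition across both axes at each position."""
    thetas = np.pi * np.arange(n_orient) / n_orient
    resp = np.stack([
        np.stack([np.abs(convolve(image, gabor_kernel(4 * s + 1, s, t, 2 * s)))
                  for t in thetas])
        for s in scales
    ])                                            # shape: (scale, orientation, H, W)
    flat = resp.reshape(-1, *image.shape)
    winner = flat.argmax(axis=0)                  # competition between orientations and scales
    s_idx, t_idx = np.unravel_index(winner, (len(scales), n_orient))
    return flat.max(axis=0), s_idx, t_idx         # activation, size index, orientation index
```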

    NASA JSC neural network survey results

    A survey of artificial neural systems in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control Research Program was conducted. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of their results make up a large part of this report. Also included is material on sources of information about artificial neural systems, such as books, technical reports, software tools, etc.

    A Neural Network Model for the Development of Simple and Complex Cell Receptive Fields Within Cortical Maps of Orientation and Ocular Dominance

    Prenatal development of the primary visual cortex leads to simple cells with spatially distinct and oriented ON and OFF subregions. These simple cells are organized into spatial maps of orientation and ocular dominance that exhibit singularities, fractures, and linear zones. On a finer spatial scale, simple cells occur that are sensitive to similar orientations but opposite contrast polarities, and exhibit both even-symmetric and odd-symmetric receptive fields. Pooling of outputs from oppositely polarized simple cells leads to complex cells that respond to both contrast polarities. A neural network model is described which simulates how simple and complex cells self-organize starting from unsegregated and unoriented geniculocortical inputs during prenatal development. Neighboring simple cells that are sensitive to opposite contrast polarities develop from a combination of spatially short-range inhibition and high-gain recurrent habituative excitation between cells that obey membrane equations. Habituation, or depression, of synapses controls reset of cell activations both through enhanced ON responses and OFF antagonistic rebounds. Orientation and ocular dominance maps form when high-gain medium-range recurrent excitation and long-range inhibition interact with the short-range mechanisms. The resulting structure clarifies how simple and complex cells contribute to perceptual processes such as texture segregation and perceptual grouping.
    Air Force Office of Scientific Research (F49620-92-J-0334); British Petroleum (BP 89A-1204); National Science Foundation (IRI-90-24877); Office of Naval Research (N00014-91-J-4100); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409)
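
    The "membrane equations" and "habituative excitation" invoked above follow the standard shunting and transmitter-gating forms used throughout this modeling family. A representative pair, written with illustrative symbols rather than the paper's exact notation, is

$$\frac{dx_i}{dt} = -A x_i + (B - x_i)\,E_i z_i - (C + x_i)\,I_i, \qquad \frac{dz_i}{dt} = K(1 - z_i) - L\,f(x_i)\,z_i,$$

    where $x_i$ is the cell's membrane potential, $E_i$ and $I_i$ are its net excitatory and inhibitory inputs, and $z_i$ is the habituative transmitter gate. Sustained activity depresses $z_i$, so input offset leaves an imbalance that generates the antagonistic OFF rebounds the abstract mentions.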

    Linking the Laminar Circuits of Visual Cortex to Visual Perception

    A detailed neural model is being developed of how the laminar circuits of visual cortical areas V1 and V2 implement context-sensitive binding processes such as perceptual grouping and attention, and develop and learn in a stable way. The model clarifies how preattentive and attentive perceptual mechanisms are linked within these laminar circuits, notably how bottom-up, top-down, and horizontal cortical connections interact. Laminar circuits allow the responses of visual cortical neurons to be influenced, not only by the stimuli within their classical receptive fields, but also by stimuli in the extra-classical surround. Such context-sensitive visual processing can greatly enhance the analysis of visual scenes, especially those containing targets that are low contrast, partially occluded, or crowded by distractors. Attentional enhancement can selectively propagate along groupings of both real and illusory contours, thereby showing how attention can selectively enhance object representations. Model mechanisms clarify how intracortical and intercortical feedback help to stabilize cortical development and learning. Although feedback plays a key role, fast feedforward processing is possible in response to unambiguous information.
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)
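
    One concrete reading of how top-down attention interacts with bottom-up signals in such circuits is a modulatory on-center, off-surround matching rule: attention enhances cells that already receive bottom-up input but cannot activate cells on its own. A minimal sketch, with illustrative gain and inhibition parameters that are assumptions rather than model values:

```python
import numpy as np

def attentive_match(bottom_up, top_down, gain=0.5, inhibition=0.2):
    """Modulatory matching (illustrative): the top-down on-center multiplies
    bottom-up activity, while a nonspecific off-surround suppresses cells
    that lack matching bottom-up support."""
    excite = bottom_up * (1.0 + gain * top_down)   # modulatory: zero bottom-up stays zero
    surround = inhibition * top_down.mean()        # nonspecific top-down inhibition
    return np.maximum(excite - surround, 0.0)
```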

    Boundary Contour System and Feature Contour System

    When humans gaze upon a scene, our brains rapidly combine several different types of locally ambiguous visual information to generate a globally consistent and unambiguous representation of Form-And-Color-And-DEpth, or FACADE. This state of affairs raises the question: What new computational principles and mechanisms are needed to understand how multiple sources of visual information cooperate automatically to generate a percept of 3-dimensional form? This chapter reviews some modeling work aimed at developing such a general-purpose vision architecture. This architecture clarifies how scenic data about boundaries, textures, shading, depth, multiple spatial scales, and motion can be cooperatively synthesized in real-time into a coherent representation of 3-dimensional form. It embodies a new vision theory that attempts to clarify the functional organization of the visual brain from the lateral geniculate nucleus (LGN) to the extrastriate cortical regions V4 and MT. Moreover, the same processes which are useful towards explaining how the visual cortex processes retinal signals are equally valuable for processing noisy multidimensional data from artificial sensors, such as synthetic aperture radar, laser radar, multispectral infrared, magnetic resonance, and high-altitude photographs. These processes generate 3-D boundary and surface representations of a scene.
    Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)

    The What-And-Where Filter: A Spatial Mapping Neural Network for Object Recognition and Image Understanding

    The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation as compared with size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models in which serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
    Advanced Research Projects Agency (ONR-N00014-92-J-4015, AFOSR 90-0083); British Petroleum (89-A-1204); National Science Foundation (IRI-90-00530, Graduate Fellowship); Office of Naval Research (N00014-91-J-4100, N00014-95-1-0409, N00014-95-1-0657); Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0334)
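
    A minimal sketch of the invariant-transformation step that the What stream would consume (the filter machinery itself is omitted): given the position, orientation, and size reported by the Where map for a figure, resample that figure into a canonical frame so later recognition is insensitive to the original pose. The nearest-neighbor sampling and names below are illustrative assumptions.

```python
import numpy as np

def invariant_patch(image, pos, orientation, size, out_size=32):
    """Resample a figure at (pos, orientation, size) into a canonical
    out_size x out_size frame: an invariant 'what' representation."""
    half = out_size // 2
    c, s = np.cos(-orientation), np.sin(-orientation)
    ys, xs = np.mgrid[-half:half, -half:half] * (size / out_size)
    src_y = pos[0] + s * xs + c * ys               # rotate/scale the sampling grid
    src_x = pos[1] + c * xs - s * ys
    src_y = np.clip(np.round(src_y).astype(int), 0, image.shape[0] - 1)
    src_x = np.clip(np.round(src_x).astype(int), 0, image.shape[1] - 1)
    return image[src_y, src_x]
```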

    Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping

    A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1 --> MT and V1 --> V2 --> MT are made, notably the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast. Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.
    Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100)
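
    The aperture problem the model addresses is easy to state concretely: a local detector viewing a straight contour can only measure the component of motion normal to that contour, so its signal is ambiguous about the true velocity. A short worked example (the function and values are illustrative):

```python
import numpy as np

def normal_flow(true_velocity, contour_orientation):
    """A local detector on a straight contour recovers only the projection
    of the true velocity onto the contour normal (the aperture problem)."""
    theta = contour_orientation + np.pi / 2        # normal is perpendicular to the contour
    n = np.array([np.cos(theta), np.sin(theta)])   # unit normal vector
    return np.dot(true_velocity, n) * n            # the measured, ambiguous motion

# A contour at 45 degrees translating rightward at speed 1 yields a measured
# normal flow of magnitude ~0.71, not the true velocity; grouping across many
# detectors is needed to recover the coherent global motion.
measured = normal_flow(np.array([1.0, 0.0]), np.pi / 4)
```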

    A Neural Model of Timed Response Learning in the Cerebellum

    A spectral timing model is developed to explain how the cerebellum learns adaptively timed responses during the rabbit's conditioned nictitating membrane response (NMR). The model posits two learning sites that respectively enable conditioned excitation and timed disinhibition of the response. Long-term potentiation of mossy fiber pathways projecting to interpositus nucleus cells allows conditioned excitation of the response's adaptive gain. Long-term depression of parallel fiber-Purkinje cell synapses in the cerebellar cortex allows learning of an adaptively timed reduction in Purkinje cell inhibition of the same nuclear cells. A spectrum of partially timed responses summates to generate an accurately timed population response. In agreement with physiological data, model Purkinje cell activity decreases in the interval following the onset of the conditioned stimulus, and nuclear cell responses match conditioned response (CR) topography. The model reproduces key behavioral features of the NMR, including the properties that CR peak amplitude occurs at unconditioned stimulus (US) onset, a discrete CR peak shift occurs with a change in interstimulus interval (ISI) between conditioned stimulus (CS) and US, mixed training at two different ISIs produces a double-peaked CR, CR acquisition and rate of responding depend unimodally on the ISI, CR onset latency decreases during training, and maladaptively timed, small-amplitude CRs result from ablation of cerebellar cortex.
    National Science Foundation (IRI-90-24877); Office of Naval Research (N00014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0225)
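
    The "spectrum of partially timed responses" can be illustrated with a population of units whose reaction rates span a range, so each unit peaks at a different delay after CS onset; a learned weighting of the spectrum then yields an accurately timed population response. A minimal sketch under those assumptions (the gamma-like activation shape and rate range are illustrative, not the paper's equations):

```python
import numpy as np

def spectral_response(t, rates, weights):
    """Timed population response as a weighted sum over a spectrum of units;
    the kernel r**2 * t * exp(-r * t) for rate r peaks at delay t = 1/r."""
    kernels = (rates[:, None] ** 2) * t[None, :] * np.exp(-rates[:, None] * t[None, :])
    return weights @ kernels

t = np.linspace(0.0, 2.0, 200)                        # seconds after CS onset
rates = np.linspace(1.0, 20.0, 40)                    # spectrum of reaction rates
isi = 0.5                                             # target CS-US interval
weights = np.exp(-((1.0 / rates) - isi) ** 2 / 0.01)  # learning favors units peaking near the ISI
cr = spectral_response(t, rates, weights)             # population CR peaks near t = isi
```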

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1° per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
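
    As a deliberately simplified stand-in for the model's MSTd-style heading computation: under pure observer translation, optic flow radiates from the focus of expansion (FOE), so heading can be recovered as the image point from which all flow vectors point away. A least-squares sketch under that pure-translation assumption (the model itself handles rotation and clutter, which this fragment does not):

```python
import numpy as np

def estimate_heading(points, flows):
    """Least-squares focus-of-expansion estimate. points and flows are (N, 2)
    arrays of (x, y) image positions and flow vectors; for pure translation
    each flow u at point p is parallel to (p - FOE), so the cross product
    (p - FOE) x u = 0 gives one linear constraint per vector."""
    a = np.stack([flows[:, 1], -flows[:, 0]], axis=1)           # rows [u_y, -u_x]
    b = points[:, 0] * flows[:, 1] - points[:, 1] * flows[:, 0]
    foe, *_ = np.linalg.lstsq(a, b, rcond=None)
    return foe                                                  # image-plane heading point
```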

    Neural Dynamics Underlying Impaired Autonomic and Conditioned Responses Following Amygdala and Orbitofrontal Lesions

    A neural model is presented that explains how outcome-specific learning modulates affect, decision-making, and Pavlovian conditioned approach responses. The model addresses how brain regions responsible for affective learning and habit learning interact, and answers a central question: What are the relative contributions of the amygdala and orbitofrontal cortex to emotion and behavior? In the model, the amygdala calculates outcome value while the orbitofrontal cortex influences attention and conditioned responding by assigning value information to stimuli. Model simulations replicate autonomic, electrophysiological, and behavioral data associated with three tasks commonly used to assay these phenomena: food consumption, Pavlovian conditioning, and visual discrimination. Interactions of the basal ganglia and amygdala with sensory and orbitofrontal cortices enable the model to replicate the complex pattern of spared and impaired behavioral and emotional capacities seen following lesions of the amygdala and orbitofrontal cortex.
    National Science Foundation (SBE-0354378, IIS-97-20333); Office of Naval Research (N00014-01-1-0624); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Institutes of Health (R29-DC02952)
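
    As a deliberately simplified stand-in for the division of labor described above (not the paper's equations): outcome-value learning and stimulus-value assignment can be separated into two small update rules, one standing in for the amygdala and one for orbitofrontal cortex. All structure and parameters below are illustrative assumptions.

```python
import numpy as np

class ValueModel:
    """Toy split of roles: an 'amygdala' table learns outcome values from
    experienced rewards; an 'orbitofrontal' map assigns those values to
    stimuli, which gates conditioned responding."""
    def __init__(self, n_stimuli, n_outcomes, lr=0.1):
        self.outcome_value = np.zeros(n_outcomes)              # amygdala: value per outcome
        self.stim_outcome = np.zeros((n_stimuli, n_outcomes))  # OFC: stimulus -> outcome map
        self.lr = lr

    def learn(self, stimulus, outcome, reward):
        # amygdala: delta-rule update of the experienced outcome's value
        self.outcome_value[outcome] += self.lr * (reward - self.outcome_value[outcome])
        # orbitofrontal: strengthen the stimulus-outcome association
        self.stim_outcome[stimulus, outcome] += self.lr * (1.0 - self.stim_outcome[stimulus, outcome])

    def conditioned_response(self, stimulus):
        # responding scales with the value the OFC assigns to the stimulus;
        # an amygdala 'lesion' (zeroing outcome_value) abolishes it
        return self.stim_outcome[stimulus] @ self.outcome_value
```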