    ARTEX: A Self-Organizing Architecture for Classifying Image Regions

    A self-organizing architecture is developed for image region classification. The system consists of a preprocessor that uses multi-scale filtering, competition, cooperation, and diffusion to compute a vector of image boundary and surface properties, notably texture and brightness properties. This vector is input to a system that incrementally learns noisy multidimensional mappings and their probabilities. The architecture is applied to difficult real-world image classification problems, including classification of synthetic aperture radar and natural texture images, and outperforms a recent state-of-the-art system at classifying natural textures.
    Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657, N00014-91-J-4100, N00014-95-1-0409); Advanced Research Projects Agency (N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-4015, F49620-92-J-0334); National Science Foundation (IRI-90-00530, IRI-90-24877)
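
    The pipeline described above (multi-scale boundary/surface features feeding an incremental classifier) can be sketched roughly as follows. This is an illustrative approximation, not the ARTEX preprocessor or learner; the feature choices, scales, and the nearest-prototype learner are assumptions made for the sketch.

```python
# Rough sketch only: multi-scale "surface" (smoothed brightness) and "boundary"
# (gradient magnitude) features per pixel, fed to a toy incremental classifier.
# Scales and the prototype learner are illustrative stand-ins, not ARTEX itself.
import numpy as np
from scipy import ndimage

def boundary_surface_features(image, scales=(1.0, 2.0, 4.0)):
    """Return an (H, W, 2 * len(scales)) per-pixel feature array."""
    feats = []
    for s in scales:
        feats.append(ndimage.gaussian_filter(image, sigma=s))              # surface / brightness
        feats.append(ndimage.gaussian_gradient_magnitude(image, sigma=s))  # boundary strength
    return np.stack(feats, axis=-1)

class IncrementalPrototypeClassifier:
    """Toy stand-in for the incremental mapping learner: one running-mean
    prototype per class, updated one labeled sample at a time."""
    def __init__(self):
        self.protos, self.counts = {}, {}

    def partial_fit(self, x, label):
        if label not in self.protos:
            self.protos[label], self.counts[label] = np.asarray(x, float).copy(), 1
        else:
            self.counts[label] += 1
            self.protos[label] += (x - self.protos[label]) / self.counts[label]

    def predict(self, x):
        return min(self.protos, key=lambda c: np.linalg.norm(x - self.protos[c]))
```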

    Transferring saturation, the finite cover property, and stability

    Saturation is (mu,kappa)-transferable in T if and only if there is an expansion T_1 of T with |T_1| = |T| such that if M is a mu-saturated model of T_1 and |M| \geq kappa, then the reduct M|L(T) is kappa-saturated. We characterize theories that are superstable without the finite cover property (f.c.p.), or without f.c.p., as, respectively, those where saturation is (aleph_0,lambda)-transferable or (kappa(T),lambda)-transferable for all lambda. Further, if for some mu \geq |T|, 2^mu > mu^+, then stability is equivalent to: for all mu \geq |T|, saturation is (mu,2^mu)-transferable.
    Comment: This version replaces the 1995 submission "Characterization of the finite cover property and stability". This version was submitted by John T. Baldwin. The paper has been accepted for the Journal of Symbolic Logic.
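
    Displayed in LaTeX, the transferability notion and the characterizations stated above read as follows (a restatement of the abstract only, with no added claims; \restriction assumes amssymb):

```latex
% (mu,kappa)-transferability, as defined in the abstract:
\[
  \text{Saturation is } (\mu,\kappa)\text{-transferable in } T
  \iff
  \exists\, T_1 \supseteq T,\ |T_1| = |T|,\ \forall M \models T_1:
  \bigl(M \text{ is } \mu\text{-saturated} \wedge |M| \ge \kappa\bigr)
  \Rightarrow M \restriction L(T) \text{ is } \kappa\text{-saturated}.
\]
% Characterizations stated in the abstract:
%   T superstable without f.c.p.  <=>  saturation is (aleph_0, lambda)-transferable for all lambda
%   T without f.c.p.              <=>  saturation is (kappa(T), lambda)-transferable for all lambda
%   If 2^mu > mu^+ for some mu >= |T|:
%     T stable  <=>  for all mu >= |T|, saturation is (mu, 2^mu)-transferable.
```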

    Neural Dynamics Underlying Impaired Autonomic and Conditioned Responses Following Amygdala and Orbitofrontal Lesions

    A neural model is presented that explains how outcome-specific learning modulates affect, decision-making, and Pavlovian conditioned approach responses. The model addresses how brain regions responsible for affective learning and habit learning interact, and answers a central question: What are the relative contributions of the amygdala and orbitofrontal cortex to emotion and behavior? In the model, the amygdala calculates outcome value while the orbitofrontal cortex influences attention and conditioned responding by assigning value information to stimuli. Model simulations replicate autonomic, electrophysiological, and behavioral data associated with three tasks commonly used to assay these phenomena: food consumption, Pavlovian conditioning, and visual discrimination. Interactions of the basal ganglia and amygdala with sensory and orbitofrontal cortices enable the model to replicate the complex pattern of spared and impaired behavioral and emotional capacities seen following lesions of the amygdala and orbitofrontal cortex.
    National Science Foundation (SBE-0354378, IIS-97-20333); Office of Naval Research (N00014-01-1-0624); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Institutes of Health (R29-DC02952)
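
    As a very loose illustration of the division of labor described above (amygdala supplies outcome value, orbitofrontal cortex maps that value onto stimuli to drive conditioned responding), the lesion logic might be sketched like this; all names, numbers, and the functional form are assumptions, not the published model equations.

```python
# Loose illustration only (not the published model): the amygdala term supplies
# outcome value, the OFC term assigns that value to the conditioned stimulus (CS);
# removing either stage removes its contribution to the conditioned response.
def conditioned_response(cs_outcome_assoc, outcome_value,
                         amygdala_intact=True, ofc_intact=True):
    """cs_outcome_assoc: learned CS -> outcome association strength in [0, 1].
    outcome_value: current value of the predicted outcome (e.g., food when hungry)."""
    value = outcome_value if amygdala_intact else 0.0    # amygdala computes outcome value
    if not ofc_intact:                                   # OFC assigns value to the stimulus
        return 0.0
    return cs_outcome_assoc * value                      # value-guided conditioned responding

print(conditioned_response(0.9, 1.0))                         # intact: strong response
print(conditioned_response(0.9, 1.0, amygdala_intact=False))  # amygdala lesion: no outcome value
print(conditioned_response(0.9, 1.0, ofc_intact=False))       # OFC lesion: value not linked to CS
```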

    Dopaminergic and Non-Dopaminergic Value Systems in Conditioning and Outcome-Specific Revaluation

    Animals are motivated to choose environmental options that can best satisfy current needs. To explain such choices, this paper introduces the MOTIVATOR (Matching Objects To Internal Values Triggers Option Revaluations) neural model. MOTIVATOR describes cognitive-emotional interactions between higher-order sensory cortices and an evaluative neuraxis composed of the hypothalamus, amygdala, and orbitofrontal cortex. Given a conditioned stimulus (CS), the model amygdala and lateral hypothalamus interact to calculate the expected current value of the subjective outcome that the CS predicts, constrained by the current state of deprivation or satiation. The amygdala relays the expected value information to orbitofrontal cells that receive inputs from anterior inferotemporal cells, and to medial orbitofrontal cells that receive inputs from rhinal cortex. The activations of these orbitofrontal cells code the subjective values of objects. These values guide behavioral choices. The model basal ganglia detect errors in CS-specific predictions of the value and timing of rewards. Excitatory inputs from the pedunculopontine nucleus interact with timed inhibitory inputs from model striosomes in the ventral striatum to regulate dopamine burst and dip responses from cells in the substantia nigra pars compacta and ventral tegmental area. Learning in cortical and striatal regions is strongly modulated by dopamine. The model is used to address tasks that examine food-specific satiety, Pavlovian conditioning, reinforcer devaluation, and simultaneous visual discrimination. Model simulations successfully reproduce discharge dynamics of known cell types, including signals that predict saccadic reaction times and CS-dependent changes in systolic blood pressure.
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Institutes of Health (R29-DC02952, R01-DC007683); National Science Foundation (IIS-97-20333, SBE-0354378); Office of Naval Research (N00014-01-1-0624)
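
    The burst/dip mechanism described above can be caricatured as a phasic excitatory reward input (standing in for the pedunculopontine signal) opposed by a learned, CS-timed inhibitory signal (standing in for striosomal inhibition). The timing, gains, and baseline below are illustrative assumptions, not the model's equations.

```python
# Caricature of the dopamine burst/dip logic: excitatory reward input minus a
# learned inhibition timed to the expected reward. Values are illustrative only.
import numpy as np

T = 50              # time steps in one trial
REWARD_TIME = 40
BASELINE = 0.2

def dopamine_trace(reward_delivered, expectation_learned):
    ppn = np.zeros(T)
    if reward_delivered:
        ppn[REWARD_TIME] = 1.0          # phasic excitatory input at reward delivery
    striosome = np.zeros(T)
    if expectation_learned:
        striosome[REWARD_TIME] = 1.0    # timed inhibition at the expected reward time
    return np.maximum(0.0, BASELINE + ppn - striosome)

print(dopamine_trace(True, False)[REWARD_TIME])   # unexpected reward -> burst (1.2)
print(dopamine_trace(True, True)[REWARD_TIME])    # predicted reward  -> baseline (0.2)
print(dopamine_trace(False, True)[REWARD_TIME])   # omitted reward    -> dip to 0.0
```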

    Landsat Satellite Image Segmentation Using the Fuzzy ARTMAP Neural Network

    This application illustrates how the fuzzy ARTMAP neural network can be used to monitor environmental changes. A benchmark problem seeks to classify regions of a Landsat image into six soil and crop classes based on images from four spectral sensors. Simulations show that fuzzy ARTMAP outperforms fourteen other neural network and machine learning algorithms. Only the k-Nearest-Neighbor algorithm shows better performance (91% vs. 89%), but without any code compression, while fuzzy ARTMAP achieves a code compression ratio of 6:1. Even with a code compression ratio of 50:1, fuzzy ARTMAP still maintains good performance (83%). This example shows how fuzzy ARTMAP can combine accuracy and code compression in real-world applications.
    Office of Naval Research (N00014-92-J-401J, N00014-91-J-4100, N00014-92-J-4015); National Science Foundation (IRI 90-00530)
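
    To make the code-compression idea concrete, here is a minimal sketch of the fuzzy ART category layer that fuzzy ARTMAP builds on: N training vectors are summarized by a much smaller set of committed category weights, and the compression ratio is simply N divided by the number of committed categories. Parameter values and the random data are illustrative; this is not the benchmark code.

```python
# Minimal fuzzy ART category layer (standard complement coding, choice function,
# vigilance test, and fast-learning update). Parameters are illustrative only.
import numpy as np

class FuzzyART:
    def __init__(self, dim, rho=0.8, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w = np.empty((0, 2 * dim))                        # complement-coded category weights

    def train(self, a):
        a = np.asarray(a, float)
        I = np.concatenate([a, 1.0 - a])                       # complement coding
        if len(self.w):
            match = np.minimum(I, self.w).sum(axis=1)          # |I ^ w_j|
            choice = match / (self.alpha + self.w.sum(axis=1)) # choice function T_j
            for j in np.argsort(-choice):                      # search categories by choice
                if match[j] / I.sum() >= self.rho:             # vigilance (match) test
                    self.w[j] = self.beta * np.minimum(I, self.w[j]) + (1 - self.beta) * self.w[j]
                    return j
        self.w = np.vstack([self.w, I])                        # commit a new category
        return len(self.w) - 1

rng = np.random.default_rng(0)
X = rng.random((1000, 4))            # e.g., 4 spectral-band values per pixel, scaled to [0, 1]
art = FuzzyART(dim=4, rho=0.6)
for x in X:
    art.train(x)
print("compression ratio ~", len(X) / len(art.w))   # training samples per committed category
```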

    Task-Irrelevant Perceptual Learning Specific to the Contrast Polarity of Motion Stimuli

    Studies of perceptual learning have focused on aspects of learning that are related to early stages of sensory processing. However, conclusions that perceptual learning results in low-level sensory plasticity remain highly controversial, largely because such learning can often be attributed to plasticity in later stages of sensory processing or in the decision processes. To address this controversy, we developed a novel random dot motion (RDM) stimulus that targets motion cells selective to contrast polarity, by ensuring that the motion direction information arises only from signal dot onsets and not their offsets, and used these stimuli in conjunction with the paradigm of task-irrelevant perceptual learning (TIPL). In TIPL, learning of a stimulus is achieved by subliminally pairing that stimulus with the targets of an unrelated training task. In this manner, we are able to probe learning for an aspect of motion processing thought to be a function of directional V1 simple cells with a learning procedure that dissociates the learned stimulus from the decision processes relevant to the training task. Our results show learning for the exposed contrast polarity, and that this learning does not transfer to the unexposed contrast polarity. These results suggest that TIPL for motion stimuli may occur at the stage of directional V1 simple cells.
    CELEST, an NSF Science of Learning Center (SBE-0354378); Defense Advanced Research Projects Agency SyNAPSE program (HR0011-09-3-0001, HR001-09-C-0011); National Science Foundation (BCS-0549036); National Institutes of Health (R21 EY017737)
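
    For orientation, a generic single-polarity random-dot-motion generator might look like the sketch below: all dots share one contrast polarity, signal dots step coherently, and noise dots are replotted at random. This is only a loose illustration; it does not implement the onset-only constraint that the study used, and all parameter names and values are assumptions.

```python
# Loose sketch of a single-polarity RDM stimulus: coherent signal dots plus
# randomly replotted noise dots; all dots rendered with one contrast polarity.
import numpy as np

def rdm_frames(n_dots=100, n_frames=60, coherence=0.5, step=2.0,
               direction_deg=0.0, field=256, rng=None):
    rng = rng or np.random.default_rng(0)
    pos = rng.uniform(0, field, size=(n_dots, 2))
    d = np.deg2rad(direction_deg)
    v = step * np.array([np.cos(d), np.sin(d)])          # coherent displacement per frame
    frames = []
    for _ in range(n_frames):
        signal = rng.random(n_dots) < coherence
        pos[signal] = (pos[signal] + v) % field           # signal dots step coherently
        pos[~signal] = rng.uniform(0, field, size=(np.count_nonzero(~signal), 2))  # noise dots
        frames.append(pos.copy())
    return frames   # dot positions per frame; all dots share one polarity when rendered
```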

    Fusion Artmap: A Neural Network Architecture for Multi-Channel Data Fusion and Classification

    Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Single-channel Fusion ARTMAP is functionally equivalent to Fuzzy ART during unsupervised learning and to Fuzzy ARTMAP during supervised learning. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking thereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network. Fusion ARTMAP's multi-channel coding is illustrated by simulations of the Quadruped Mammal database.
    Defense Advanced Research Projects Agency (ONR N0014-92-J-401J, AFOSR 90-0083, ONR N00014-92-J-4015); National Science Foundation (IRI-90-00530, IRI-90-24877, Graduate Fellowship); Office of Naval Research (N00014-91-J-4100); British Petroleum (89-A-1204); Air Force Office of Scientific Research (F49620-92-J-0334)
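
    The parallel match tracking step described above might be sketched as follows: when the global prediction is wrong, all channels' vigilances are raised together, so the channel whose match exceeds its vigilance by the smallest margin is the first to fail and is the only one reset. The function name, margin rule, and epsilon are assumptions for illustration, not the published algorithm.

```python
# Illustrative sketch of parallel match tracking: after a predictive error, raising
# every channel's vigilance in lockstep means the channel with the smallest
# (match - vigilance) margin fails first; only that channel's code is reset.
def parallel_match_tracking(channel_matches, vigilances, epsilon=0.001):
    """channel_matches[k]: match value of channel k's chosen category, in [0, 1].
    vigilances[k]: channel k's current vigilance. Returns the index of the reset channel."""
    margins = [m - r for m, r in zip(channel_matches, vigilances)]
    k = min(range(len(margins)), key=lambda i: margins[i])   # poorest-margin channel
    vigilances[k] = channel_matches[k] + epsilon             # track just above its match
    return k                                                 # reset only this channel

matches = [0.92, 0.75, 0.88]
rhos = [0.70, 0.70, 0.70]
print(parallel_match_tracking(matches, rhos))   # -> 1: the poorest-matching channel resets
```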

    Fusion ARTMAP: An Adaptive Fuzzy Network for Multi-Channel Classification

    Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Fusion ARTMAP generalizes the fuzzy ARTMAP architecture in order to adaptively classify multi-channel data. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking thereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network.
    Advanced Research Projects Agency (ONR N00014-92-J-401J, ONR N00014-92-J-4015); National Science Foundation (IRI-90-00530, IRI-90-24877, Graduate Fellowship); Office of Naval Research (N00014-91-J-4100); British Petroleum (89-A-1204); Air Force Office of Scientific Research (F49620-92-J-0334)

    Recognition of 3-D Objects from Multiple 2-D Views by a Self-Organizing Neural Architecture

    The recognition of 3-D objects from sequences of their 2-D views is modeled by a neural architecture called VIEWNET, which uses View Information Encoded With NETworks. VIEWNET illustrates how several types of noise and variability in image data can be progressively removed while incomplete image features are restored and invariant features are discovered using an appropriately designed cascade of processing stages. VIEWNET first processes 2-D views of 3-D objects using the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and removes noise from the images. Boundary regularization and completion are achieved by the same mechanisms that suppress image noise. A log-polar transform is taken with respect to the centroid of the resulting figure and then re-centered to achieve 2-D scale and rotation invariance. The invariant images are coarse coded to further reduce noise, reduce foreshortening effects, and increase generalization. These compressed codes are input into a supervised learning system based on the fuzzy ARTMAP algorithm. Recognition categories of 2-D views are learned before evidence from sequences of 2-D view categories is accumulated to improve object recognition. Recognition is studied with noisy and clean images using slow and fast learning. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 2-D views of jet aircraft with and without additive noise. A recognition rate of 90% is achieved with one 2-D view category and 98.5% with three 2-D view categories.
    National Science Foundation (IRI 90-24877); Office of Naval Research (N00014-91-J-1309, N00014-91-J-4100, N00014-92-J-0499); Air Force Office of Scientific Research (F9620-92-J-0499, 90-0083)
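
    The invariance step described above (a log-polar transform about the figure centroid) can be sketched as below: scaling the figure shifts the output along the log-radius axis and rotating it shifts the output along the angle axis, so re-centering removes both. Grid sizes and interpolation order are illustrative assumptions; this is not the VIEWNET preprocessing code.

```python
# Sketch of a centroid-centered log-polar transform: 2-D scale and rotation of the
# input figure become translations of the output along its two axes. Illustrative only.
import numpy as np
from scipy import ndimage

def log_polar(image, n_r=64, n_theta=64):
    ys, xs = np.nonzero(image > 0)             # assumes a nonempty figure on a dark background
    cy, cx = ys.mean(), xs.mean()              # figure centroid
    r_max = 0.5 * np.hypot(*image.shape)
    radii = np.exp(np.linspace(0.0, np.log(r_max), n_r))      # log-spaced radii (from 1 pixel)
    angles = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, angles, indexing="ij")
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return ndimage.map_coordinates(image, coords, order=1)    # (n_r, n_theta) log-polar image
```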