
    Fuzzy ART: Fast Stable Learning and Categorization of Analog Patterns by an Adaptive Resonance System

    A Fuzzy ART model capable of rapid stable learning of recognition categories in response to arbitrary sequences of analog or binary input patterns is described. Fuzzy ART incorporates computations from fuzzy set theory into the ART 1 neural network, which learns to categorize only binary input patterns. The generalization to learning both analog and binary input patterns is achieved by replacing appearances of the intersection operator (∩) in ART 1 by the MIN operator (∧) of fuzzy set theory. The MIN operator reduces to the intersection operator in the binary case. Category proliferation is prevented by normalizing input vectors at a preprocessing stage. A normalization procedure called complement coding leads to a symmetric theory in which the MIN operator (∧) and the MAX operator (∨) of fuzzy set theory play complementary roles. Complement coding uses on-cells and off-cells to represent the input pattern, and preserves individual feature amplitudes while normalizing the total on-cell/off-cell vector. Learning is stable because all adaptive weights can only decrease in time. Decreasing weights correspond to increasing sizes of category "boxes". Smaller vigilance values lead to larger category boxes. Learning stops when the input space is covered by boxes. With fast learning and a finite input set of arbitrary size and composition, learning stabilizes after just one presentation of each input pattern. A fast-commit slow-recode option combines fast learning with a forgetting rule that buffers system memory against noise. Using this option, rare events can be rapidly learned, yet previously learned memories are not rapidly erased in response to statistically unreliable input fluctuations. British Petroleum (89-A-1204); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI-90-00530); Air Force Office of Scientific Research (90-0175)
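
    The core computations described above fit in a few lines. The sketch below is a minimal illustration, not the authors' code: it assumes a complement-coded input in [0,1]^d, the fast-learning rule, and illustrative names (fuzzy_art_step, alpha for the choice parameter, rho for vigilance). Smaller rho admits larger category "boxes", exactly as the abstract states.

```python
import numpy as np

def complement_code(a):
    """Complement coding: represent a in [0,1]^d as [a, 1 - a].
    Every coded vector then has the same L1 norm d, which normalizes
    inputs and prevents category proliferation."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001):
    """Present one complement-coded input I to a list of category weight
    vectors and apply fast learning; returns the chosen category index.
    Weights can only shrink (element-wise MIN), so learning is stable."""
    # Search categories in order of the choice function T_j = |I ∧ w_j| / (alpha + |w_j|)
    order = sorted(range(len(weights)),
                   key=lambda j: -np.minimum(I, weights[j]).sum()
                                  / (alpha + weights[j].sum()))
    for j in order:
        match = np.minimum(I, weights[j]).sum() / I.sum()
        if match >= rho:                              # vigilance test passed: resonance
            weights[j] = np.minimum(I, weights[j])    # fast learning: weights only decrease
            return j
    weights.append(I.copy())                          # no category resonates: commit a new one
    return len(weights) - 1
```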

    Expanding the boundaries of evaluative learning research: how intersecting regularities shape our likes and dislikes

    Over the last 30 years, researchers have identified several types of procedures through which novel preferences may be formed and existing ones altered. For instance, regularities in the presence of a single stimulus (as in the case of mere exposure) or 2 or more stimuli (as in the case of evaluative conditioning) have been shown to influence liking. We propose that intersections between regularities represent a previously unrecognized class of procedures for changing liking. Across 4 related studies, we found strong support for the hypothesis that when environmental regularities intersect with one another (i.e., share elements or have elements that share relations with other elements), the evaluative properties of the elements of those regularities can change. These changes in liking were observed across a range of stimuli and procedures and were evident when self-report measures, implicit measures, and behavioral choice measures of liking were employed. Functional and mental explanations of this phenomenon are offered, followed by a discussion of how this new type of evaluative learning effect can accelerate theoretical, methodological, and empirical development in attitude research.

    Object Action Complexes as an Interface for Planning and Robot Control

    Much prior work in integrating high-level artificial intelligence planning technology with low-level robotic control has foundered on the significant representational differences between these two areas of research. We discuss a proposed solution to this representational discontinuity in the form of object-action complexes (OACs). The pairing of actions and objects in a single interface representation captures the needs of both reasoning levels, and will enable machine learning of high-level action representations from low-level control representations. The difference between the representations that are effective for continuous control of robotic systems and those used by discrete symbolic AI presents a significant challenge for integrating AI planning research and robotics; these areas of research should be able…
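
    One way to picture the interface argued for above is a record that pairs a symbolic action description (usable by a planner) with the continuous controller that realizes it on a concrete object. The sketch below is a hypothetical illustration, not the OAC formalism from the paper; all field names (preconditions, effects, controller, success_stats) are assumptions chosen for clarity.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Set

@dataclass
class ObjectActionComplex:
    """A minimal, illustrative object-action complex: a symbolic description
    for the planner, paired with the low-level controller that executes it."""
    name: str                            # e.g. "grasp(cup)"
    preconditions: Set[str]              # symbolic facts required before execution
    effects: Set[str]                    # symbolic facts expected to hold afterwards
    controller: Callable[[Dict], Dict]   # continuous control routine: sensor state -> result state
    success_stats: Dict[str, int] = field(default_factory=lambda: {"trials": 0, "successes": 0})

    def execute(self, world_state: Dict) -> Dict:
        """Run the continuous controller and record the outcome, so that
        high-level action models can be learned from low-level experience."""
        result = self.controller(world_state)
        self.success_stats["trials"] += 1
        if self.effects <= set(result.get("facts", [])):
            self.success_stats["successes"] += 1
        return result
```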

    Making Neural QA as Simple as Possible but not Simpler

    Recent development of large-scale question answering (QA) datasets triggered a substantial amount of research into end-to-end neural architectures for QA. Increasingly complex systems have been conceived without comparison to simpler neural baseline systems that would justify their complexity. In this work, we propose a simple heuristic that guides the development of neural baseline systems for the extractive QA task. We find that there are two ingredients necessary for building a high-performing neural QA system: first, the awareness of question words while processing the context and second, a composition function that goes beyond simple bag-of-words modeling, such as recurrent neural networks. Our results show that FastQA, a system that meets these two requirements, can achieve very competitive performance compared with existing models. We argue that this surprising finding puts results of previous systems and the complexity of recent QA datasets into perspective.
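
    The two ingredients named in this abstract can be made concrete in a few lines. The following PyTorch sketch illustrates the recipe rather than the authors' FastQA implementation: each context token carries a binary "appears in the question" feature (question awareness), and a bidirectional LSTM supplies the composition function beyond bag-of-words; the start/end span scorers and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimpleExtractiveQA(nn.Module):
    """Minimal extractive-QA baseline sketch with the two ingredients above."""
    def __init__(self, vocab_size, emb_dim=100, hidden=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # +1 input feature: does this context token also appear in the question?
        self.rnn = nn.LSTM(emb_dim + 1, hidden, batch_first=True, bidirectional=True)
        self.start_scorer = nn.Linear(2 * hidden, 1)   # score each token as answer start
        self.end_scorer = nn.Linear(2 * hidden, 1)     # score each token as answer end

    def forward(self, context_ids, question_ids):
        # Binary "word in question" feature for every context position
        in_q = (context_ids.unsqueeze(-1) == question_ids.unsqueeze(1)).any(-1)
        x = torch.cat([self.embed(context_ids), in_q.unsqueeze(-1).float()], dim=-1)
        h, _ = self.rnn(x)                              # recurrent composition over the context
        return self.start_scorer(h).squeeze(-1), self.end_scorer(h).squeeze(-1)
```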

    A model of the emergence and evolution of integrated worldviews

    It is proposed that the ability of humans to flourish in diverse environments and evolve complex cultures reflects the following two underlying cognitive transitions. The transition from the coarse-grained associative memory of Homo habilis to the fine-grained memory of Homo erectus enabled limited representational redescription of perceptually similar episodes, abstraction, and analytic thought, the last of which is modeled as the formation of states and of lattices of properties and contexts for concepts. The transition to the modern mind of Homo sapiens is proposed to have resulted from onset of the capacity to spontaneously and temporarily shift to an associative mode of thought conducive to interaction amongst seemingly disparate concepts, modeled as the forging of conjunctions resulting in states of entanglement. The fruits of associative thought became ingredients for analytic thought, and vice versa. The ratio of associative pathways to concepts surpassed a percolation threshold resulting in the emergence of a self-modifying, integrated internal model of the world, or worldview.
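
    The percolation claim at the end of this abstract can be illustrated with a toy random-graph experiment: once the ratio of random "associative pathways" to "concepts" passes roughly 0.5 (mean degree 1 in an Erdős–Rényi graph), a giant connected cluster appears. This is a generic percolation demonstration, not the paper's model; all names are illustrative.

```python
import random

def largest_component_fraction(n_concepts, n_links, seed=0):
    """Fraction of 'concepts' in the largest connected cluster of a random
    graph with n_links random 'associative pathways' (union-find)."""
    random.seed(seed)
    parent = list(range(n_concepts))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for _ in range(n_links):
        a, b = random.randrange(n_concepts), random.randrange(n_concepts)
        parent[find(a)] = find(b)           # union the two clusters
    sizes = {}
    for v in range(n_concepts):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n_concepts

# The giant cluster emerges once the pathway-to-concept ratio exceeds about 0.5
for ratio in (0.3, 0.5, 0.7, 1.0):
    print(ratio, round(largest_component_fraction(10_000, int(10_000 * ratio)), 3))
```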

    ARTMAP: Supervised Real-Time Learning and Classification of Nonstationary Data by a Self-Organizing Neural Network

    This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing. Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out subsequent to activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials ρa relaxes to a baseline vigilance ρ̄a. When ρ̄a is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized. British Petroleum (98-A-1204); Defense Advanced Research Projects Agency (90-0083, 90-0175, 90-0128); National Science Foundation (IRI-90-00539); Army Research Office (DAAL-03-88-K0088)
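
    The match-tracking computation described above (raising ρa by the minimal amount needed to correct a predictive error) is sketched below. This is a simplified illustration, not the ARTMAP architecture itself: the ARTb module is replaced by a plain label lookup, ARTa is reduced to Fuzzy ART-style choice and match functions, and the small increment eps stands in for the "minimal amount"; all names and defaults are assumptions.

```python
import numpy as np

def artmap_present(a, target, weights, category_labels,
                   baseline_rho=0.0, alpha=0.001, eps=1e-6):
    """One supervised training presentation with match tracking (sketch).
    a               : input vector (e.g. complement coded)
    target          : correct label for a (stands in for the ARTb prediction)
    weights         : list of ARTa category weight vectors
    category_labels : dict mapping category index -> learned label
    Returns the index of the category that ends up coding the input."""
    rho = baseline_rho
    while True:
        # Rank committed categories by the choice function
        order = sorted(range(len(weights)),
                       key=lambda j: -np.minimum(a, weights[j]).sum()
                                      / (alpha + weights[j].sum()))
        chosen = None
        for j in order:
            match = np.minimum(a, weights[j]).sum() / a.sum()
            if match < rho:
                continue                        # fails vigilance, try the next category
            if category_labels[j] != target:    # predictive error on the "ARTb" side
                rho = match + eps               # match tracking: minimal vigilance raise
                break                           # restart the search with raised vigilance
            chosen = j                          # resonance with a correct prediction
            break
        if chosen is not None:
            weights[chosen] = np.minimum(a, weights[chosen])   # fast learning
            return chosen
        # If every category now fails vigilance (or none exist), commit a new one
        if all(np.minimum(a, w).sum() / a.sum() < rho for w in weights):
            weights.append(a.copy())
            category_labels[len(weights) - 1] = target
            return len(weights) - 1
```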

    Cortex, countercurrent context, and dimensional integration of lifetime memory

    The correlation between relative neocortex size and longevity in mammals encourages a search for a cortical function specifically related to the life-span. A candidate in the domain of permanent and cumulative memory storage is proposed and explored in relation to basic aspects of cortical organization. The pattern of cortico-cortical connectivity between functionally specialized areas and the laminar organization of that connectivity converges on a globally coherent representational space in which contextual embedding of information emerges as an obligatory feature of cortical function. This brings a powerful mode of inductive knowledge within reach of mammalian adaptations, a mode which combines item specificity with classificatory generality. Its neural implementation is proposed to depend on an obligatory interaction between the oppositely directed feedforward and feedback currents of cortical activity, in countercurrent fashion. Direct interaction of the two streams along their cortex-wide local interface supports a scheme of "contextual capture" for information storage responsible for the lifelong cumulative growth of a uniquely cortical form of memory termed "personal history." This approach to cortical function helps elucidate key features of cortical organization as well as cognitive aspects of mammalian life history strategies.

    Neural Distributed Autoassociative Memories: A Survey

    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors) where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear time search (in the number of stored items) for approximate nearest neighbors among vectors of high dimension. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey is focused mainly on the networks of Hopfield, Willshaw and Potts, that have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory, but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion we discuss the relations to similarity search, advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for the case of very high-dimensional vectors.
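
    As a concrete example of the class of models surveyed, here is a minimal Hopfield-style autoassociative memory: outer-product (Hebbian) storage of bipolar patterns and retrieval by iterated thresholding of the local fields. This is a generic textbook sketch rather than code from the survey, and synchronous updates are used for brevity even though the classical model updates neurons asynchronously.

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian (outer-product) storage of bipolar (+1/-1) patterns; returns the
    symmetric weight matrix with zero diagonal."""
    P = np.asarray(patterns, dtype=float)          # shape (num_patterns, n)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, probe, steps=20):
    """Iterative retrieval: repeatedly threshold the local fields until the
    state stops changing, returning the retrieved pattern."""
    s = np.sign(np.asarray(probe, dtype=float))
    s[s == 0] = 1
    for _ in range(steps):
        new = np.sign(W @ s)
        new[new == 0] = 1
        if np.array_equal(new, s):
            break
        s = new
    return s

# Example: store two random patterns, then recall from a corrupted probe
rng = np.random.default_rng(0)
mem = rng.choice([-1, 1], size=(2, 100))
W = hopfield_store(mem)
noisy = mem[0].copy()
noisy[:10] *= -1                                   # flip 10 of 100 bits
print(np.array_equal(hopfield_recall(W, noisy), mem[0]))   # typically True
```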

    Mapping dynamic interactions among cognitive biases in depression

    Depression is theorized to be caused in part by biased cognitive processing of emotional information. Yet, prior research has adopted a reductionist approach that does not characterize how biases in cognitive processes such as attention and memory work together to confer risk for this complex multifactorial disorder. Grounded in affective and cognitive science, we highlight four mechanisms to understand how attention biases, working memory difficulties, and long-term memory biases interact and contribute to depression. We review evidence for each mechanism and highlight time- and context-dependent dynamics. We outline methodological considerations and recommendations for research in this area. We conclude with directions to advance the understanding of depression risk, cognitive training interventions, and transdiagnostic properties of cognitive biases and their interactions.