4,346 research outputs found

    Fuzzy ART

    Full text link
    Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART and supervised fuzzy ARTMAP synthesize fuzzy logic and ART networks by exploiting the formal similarity between the computations of fuzzy subsethood and the dynamics of ART category choice, search, and learning. Fuzzy ART self-organizes stable recognition categories in response to arbitrary sequences of analog or binary input patterns. It generalizes the binary ART 1 model, replacing the set-theoretic intersection (∩) with the fuzzy intersection (∧), or component-wise minimum. A normalization procedure called complement coding leads to a symmetric theory in which the fuzzy intersection and the fuzzy union (∨), or component-wise maximum, play complementary roles. Complement coding preserves individual feature amplitudes while normalizing the input vector, and prevents a potential category proliferation problem. Adaptive weights start equal to one and can only decrease in time. A geometric interpretation of fuzzy ART represents each category as a box that increases in size as weights decrease. A matching criterion controls search, determining how close an input and a learned representation must be for a category to accept the input as a new exemplar. A vigilance parameter (ρ) sets the matching criterion and determines how finely or coarsely an ART system will partition inputs. High vigilance creates fine categories, represented by small boxes. Learning stops when boxes cover the input space. With fast learning, fixed vigilance, and an arbitrary input set, learning stabilizes after just one presentation of each input. A fast-commit slow-recode option allows rapid learning of rare events yet buffers memories against recoding by noisy inputs. Fuzzy ARTMAP unites two fuzzy ART networks to solve supervised learning and prediction problems.
A Minimax Learning Rule controls ARTMAP category structure, conjointly minimizing predictive error and maximizing code compression. Low vigilance maximizes compression but may then cause very different inputs to make the same prediction. When this coarse grouping strategy causes a predictive error, an internal match tracking control process increases vigilance just enough to correct the error. ARTMAP automatically constructs a minimal number of recognition categories, or "hidden units," to meet accuracy criteria. An ARTMAP voting strategy improves prediction by training the system several times using different orderings of the input set. Voting assigns confidence estimates to competing predictions given small, noisy, or incomplete training sets. ARPA benchmark simulations illustrate fuzzy ARTMAP dynamics. The chapter also compares fuzzy ARTMAP to Salzberg's Nested Generalized Exemplar (NGE) and to Simpson's Fuzzy Min-Max Classifier (FMMC), and concludes with a summary of ART and ARTMAP applications. Advanced Research Projects Agency (ONR N00014-92-J-4015); National Science Foundation (IRI-90-00530); Office of Naval Research (N00014-91-J-4100)
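The fuzzy ART computations the abstract describes — complement coding, category choice, the vigilance test, and fast learning via the component-wise minimum — can be sketched as follows. This is a minimal illustrative reading of the abstract, not the authors' reference implementation; the function and parameter names (`fuzzy_art`, `rho`, `alpha`) are assumptions.

```python
import numpy as np

def complement_code(a):
    # Append the complement so every coded input I has the same norm |I| = d.
    return np.concatenate([a, 1.0 - a])

def fuzzy_art(inputs, rho=0.75, alpha=0.001):
    """Minimal fast-learning fuzzy ART sketch (names are illustrative)."""
    weights = []                      # committed category weight vectors
    labels = []                       # category chosen for each input
    for a in inputs:
        I = complement_code(a)
        # Category choice: T_j = |I ∧ w_j| / (alpha + |w_j|), tried best-first.
        order = sorted(
            range(len(weights)),
            key=lambda j: -np.minimum(I, weights[j]).sum()
                          / (alpha + weights[j].sum()))
        for j in order:
            # Vigilance (matching) test: |I ∧ w_j| / |I| >= rho.
            if np.minimum(I, weights[j]).sum() / I.sum() >= rho:
                weights[j] = np.minimum(I, weights[j])  # fast learning: w only shrinks
                labels.append(j)
                break
        else:
            # No category passed vigilance: commit a new one.
            # Uncommitted weights start at one, so w <- I ∧ 1 = I.
            weights.append(I.copy())
            labels.append(len(weights) - 1)
    return weights, labels
```

Raising `rho` (high vigilance) yields more, finer categories (smaller boxes); lowering it lets distant inputs share a category, matching the compression trade-off described above.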

    Adaptive Resonance Theory

    Full text link
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)

    A model of the emergence and evolution of integrated worldviews

    Get PDF
    It is proposed that the ability of humans to flourish in diverse environments and evolve complex cultures reflects the following two underlying cognitive transitions. The transition from the coarse-grained associative memory of Homo habilis to the fine-grained memory of Homo erectus enabled limited representational redescription of perceptually similar episodes, abstraction, and analytic thought, the last of which is modeled as the formation of states and of lattices of properties and contexts for concepts. The transition to the modern mind of Homo sapiens is proposed to have resulted from onset of the capacity to spontaneously and temporarily shift to an associative mode of thought conducive to interaction amongst seemingly disparate concepts, modeled as the forging of conjunctions resulting in states of entanglement. The fruits of associative thought became ingredients for analytic thought, and vice versa. The ratio of associative pathways to concepts surpassed a percolation threshold, resulting in the emergence of a self-modifying, integrated internal model of the world, or worldview.

    Video Compression from the Hardware Perspective

    Get PDF

    The neural cognitive architecture

    Get PDF
    The development of a cognitive architecture based on neurons is currently viable. An initial architecture is proposed, based around a slow serial system and a fast parallel system, with additional subsystems for behaviours such as sensing, action, and language. Current technology allows us to emulate millions of neurons in real time, supporting the development and use of relatively sophisticated systems based on the architecture. While knowledge of biological neural processing, learning rules, and cognitive behaviour is extensive, it is far from complete. This architecture provides a slowly varying neural structure that forms the framework for cognition and learning. It will provide support for exploring biological neural behaviour in functioning animals, and for the development of artificial systems based on neurons.

    A study of data coding technology developments in the 1980-1985 time frame, volume 2

    Get PDF
    The source parameters of digitized analog data are discussed. Different data compression schemes are outlined, and analyses of their implementations are presented. Finally, bandwidth compression techniques are given for video signals.

    Neural Distributed Autoassociative Memories: A Survey

    Full text link
    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors) where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear time search (in the number of stored items) for approximate nearest neighbors among vectors of high dimension. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey is focused mainly on the networks of Hopfield, Willshaw and Potts, that have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory, but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion we discuss the relations to similarity search, advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for the case of very high dimensional vectors.
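The Hopfield model the survey centers on stores patterns with a local Hebbian rule and retrieves them by iterative dynamics driven only by information locally available to each neuron. A minimal sketch, assuming ±1 binary patterns and synchronous updates (the function names and the synchronous update scheme are illustrative choices, not the survey's own code):

```python
import numpy as np

def hopfield_train(patterns):
    """One-shot Hebbian learning: W_ij accumulates p_i * p_j over all patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)           # local Hebbian rule
    np.fill_diagonal(W, 0)            # no self-connections
    return W

def hopfield_recall(W, probe, steps=10):
    """Iterate s <- sign(W s) from a noisy probe until a fixed point is reached."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1         # break ties deterministically
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s
```

Retrieval from a corrupted probe converging back to the nearest stored pattern is exactly the approximate-nearest-neighbor behavior whose speed, relative to conventional index structures, the survey leaves as an open question.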

    Mathematical studies of coal measures sedimentation in Ayrshire, Scotland

    Get PDF