
    NASA JSC neural network survey results

    A survey of Artificial Neural Systems was conducted in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control Research Program. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Capacity, Fidelity, and Noise Tolerance of Associative Spatial-Temporal Memories Based on Memristive Neuromorphic Network

    We have calculated the key characteristics of associative (content-addressable) spatial-temporal memories based on neuromorphic networks with restricted connectivity - "CrossNets". Such networks may be naturally implemented in nanoelectronic hardware using hybrid CMOS/memristor circuits, which may feature extremely high energy efficiency, approaching that of biological cortical circuits, at much higher operation speed. Our numerical simulations, in some cases confirmed by analytical calculations, have shown that the characteristics depend substantially on the method of recording information into the memory. Of the four methods we have explored, two look especially promising: one based on quadratic programming, and the other a specific discrete version of gradient descent. The latter method provides a slightly lower memory capacity (at the same fidelity) than the former, but it allows local recording, which may be more readily implemented in nanoelectronic hardware. Most importantly, in synchronous retrieval both methods provide a capacity higher than that of the well-known Ternary Content-Addressable Memories with the same number of nonvolatile memory cells (e.g., memristors), though the input noise immunity of the CrossNet memories is somewhat lower.
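    The CrossNet recording rules themselves are not spelled out in the abstract; the following is a minimal sketch of what "synchronous retrieval" means in a generic binary associative memory (a Hopfield-style network with simple Hebbian recording standing in for the paper's recording methods), with all names and parameters chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 256, 10                        # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian recording (a stand-in for the quadratic-programming or
# discrete gradient-descent recording methods studied in the paper)
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def retrieve(x, steps=20):
    """Synchronous retrieval: every neuron updates at once each step."""
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1         # break ties deterministically
        if np.array_equal(x_new, x):  # fixed point reached
            break
        x = x_new
    return x

# corrupt a stored pattern with ~10% bit flips, then recall it
noisy = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
noisy[flip] *= -1
recalled = retrieve(noisy)
```

    At this low load (10 patterns in 256 neurons) the corrupted input falls well inside the stored pattern's basin of attraction, which is the noise-tolerance property the abstract quantifies for CrossNets.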

    A Binary Neural Shape Matcher using Johnson Counters and Chain Codes

    In this paper, we introduce a neural network-based shape matching algorithm that uses Johnson Counter codes coupled with chain codes. Shape matching is a fundamental requirement in content-based image retrieval systems. Chain codes describe shapes using sequences of numbers. They are simple and flexible. We couple this power with the efficiency and flexibility of a binary associative-memory neural network. We focus on the implementation details of the algorithm when it is constructed using the neural network. We demonstrate how the binary associative-memory neural network can index and match chain codes where the chain code elements are represented by Johnson codes
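    The paper's associative-memory network itself is not described in the abstract, but the two ingredients it combines are standard and easy to sketch: an 8-direction chain code for a boundary, and a 4-bit Johnson (twisted-ring) counter code in which consecutive directions differ by exactly one bit, which is what makes binary matching tolerant of small direction changes. The boundary example below is hypothetical.

```python
# 4-bit Johnson counter codes for the 8 chain-code directions:
# consecutive directions differ by exactly one bit (cyclically).
JOHNSON = ["0000", "1000", "1100", "1110",
           "1111", "0111", "0011", "0001"]

# 8-connected chain-code directions: 0=E, 1=NE, 2=N, ..., 7=SE
MOVES = [(1, 0), (1, 1), (0, 1), (-1, 1),
         (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(boundary):
    """Chain code of a boundary given as consecutive (x, y) points."""
    return [MOVES.index((x1 - x0, y1 - y0))
            for (x0, y0), (x1, y1) in zip(boundary, boundary[1:])]

def encode(code):
    """Replace each chain-code element by its Johnson code."""
    return [JOHNSON[c] for c in code]

# a unit square traversed counter-clockwise
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
cc = chain_code(square)               # [0, 2, 4, 6]
johnson_cc = encode(cc)
```

    The single-bit spacing of adjacent Johnson codes means nearly-equal directions produce nearly-equal binary inputs, which suits Hamming-distance-based associative lookup.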

    Neuroeconomics, Philosophy of Mind, and Identity


    Perceptual Generalization and Context in a Network Memory-Inspired Long-Term Memory for Artificial Cognition

    Abstract: In the framework of open-ended learning cognitive architectures for robots, this paper deals with the design of a Long-Term Memory (LTM) structure that can accommodate the progressive acquisition of experience-based decision capabilities, or what different authors call "automation" of what is learnt, as a complementary system to more common prospective functions. The LTM proposed here provides for a relational storage of knowledge nuggets given the form of artificial neural networks (ANNs) that is representative of the contexts in which they are relevant in a configural associative structure. It also addresses the problem of continuous perceptual spaces and the task- and context-related generalization or categorization of perceptions in an autonomous manner within the embodied sensorimotor apparatus of the robot. These issues are analyzed and a solution is proposed through the introduction of two new types of knowledge nuggets: P-nodes representing perceptual classes and C-nodes representing contexts. The approach is studied and its performance evaluated through its implementation and application to a real robotic experiment.
    Xunta de Galicia; ED431C 2017/12
    Xunta de Galicia; ED341D R2016/01

    Analogical Mapping with Sparse Distributed Memory: A Simple Model that Learns to Generalize from Examples

    Abstract: We present a computational model for the analogical mapping of compositional structures that combines two existing ideas known as holistic mapping vectors and sparse distributed memory. The model enables integration of structural and semantic constraints when learning mappings of the type x_i → y_i and computing analogies x_j → y_j for novel inputs x_j. The model has a one-shot learning process, is randomly initialized, and has three exogenous parameters: the dimensionality D of representations, the memory size S, and the probability v for activation of the memory. After learning three examples, the model generalizes correctly to novel examples. We find minima in the probability of generalization error for certain values of v, S, and the number of different mapping examples learned. These results indicate that the optimal size of the memory scales with the number of different mapping examples learned and that the sparseness of the memory is important. The optimal dimensionality of binary representations is of the order 10^4, which is consistent with a known analytical estimate and the synapse count for most cortical neurons. We demonstrate that the model can learn analogical mappings of generic two-place relationships, and we calculate the error probabilities for recall and generalization.
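    A holistic mapping vector can be sketched without the sparse-distributed-memory component the paper adds: in high-dimensional binary codes, binding is elementwise XOR and bundling is an elementwise majority vote, so a mapping learned from a few x_i → y_i pairs transfers to a novel input. The noiseless relation R below is an assumption chosen to keep the demonstration exact; the paper's full model handles noisier, structured mappings.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000                            # dimensionality of order 10^4, as in the paper

def majority(vecs):
    """Bundle binary vectors by elementwise majority vote."""
    return (np.sum(vecs, axis=0) > len(vecs) / 2).astype(np.uint8)

# hidden relation R maps x to y by elementwise XOR (binding)
R = rng.integers(0, 2, D, dtype=np.uint8)
xs = rng.integers(0, 2, (3, D), dtype=np.uint8)
ys = xs ^ R                           # three training pairs x_i -> y_i

# holistic mapping vector: bundle the per-example bindings x_i XOR y_i
M = majority(xs ^ ys)

# apply the learned mapping to a novel input
x_new = rng.integers(0, 2, D, dtype=np.uint8)
y_pred = M ^ x_new                    # generalizes: equals x_new ^ R
```

    Because XOR is its own inverse, each binding x_i ^ y_i equals R exactly here, the majority vote recovers R, and the mapping applies perfectly to unseen inputs; with noisy or partially shared structure the vote recovers R only approximately, which is where the error probabilities studied in the paper arise.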

    Three Highly Parallel Computer Architectures and Their Suitability for Three Representative Artificial Intelligence Problems

    Virtually all current Artificial Intelligence (AI) applications are designed to run on sequential (von Neumann) computer architectures. As a result, current systems do not scale up: as knowledge is added to these systems, a point is reached where their performance quickly degrades. The performance of a von Neumann machine is limited by the bandwidth between memory and processor (the von Neumann bottleneck). The bottleneck is avoided by distributing the processing power across the memory of the computer; in this scheme the memory becomes the processor (a "smart memory"). This paper highlights the relationship between three representative AI application domains, namely knowledge representation, rule-based expert systems, and vision, and their parallel hardware realizations. Three machines, covering a wide range of fundamental properties of parallel processors, namely module granularity, concurrency control, and communication geometry, are reviewed: the Connection Machine (a fine-grained SIMD hypercube), DADO (a medium-grained MIMD/SIMD/MSIMD tree machine), and the Butterfly (a coarse-grained MIMD Butterfly-switch machine).

    Information Processing, Computation and Cognition

    Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both – although others disagree vehemently. Yet different cognitive scientists use ‘computation’ and ‘information processing’ to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on the one hand and classicism and connectionism/computational neuroscience on the other. We defend the relevance to cognitive science of both computation, at least in a generic sense, and information processing, in three important senses of the term. Our account advances several foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates’ empirical aspects.