7,292 research outputs found

    NASA JSC neural network survey results

    A survey of artificial neural systems was conducted in support of the Automatic Perception for Mission Planning and Flight Control Research Program at NASA's Johnson Space Center. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Overlapping memory replay during sleep builds cognitive schemata

    Sleep enhances integration across multiple stimuli, abstraction of general rules, insight into hidden solutions and false memory formation. Newly learned information is better assimilated if compatible with an existing cognitive framework or schema. This article proposes a mechanism by which the reactivation of newly learned memories during sleep could actively underpin both schema formation and the addition of new knowledge to existing schemata. Under this model, the overlapping replay of related memories selectively strengthens shared elements. Repeated reactivation of memories in different combinations progressively builds schematic representations of the relationships between stimuli. We argue that this selective strengthening forms the basis of cognitive abstraction, and explain how it facilitates insight and false memory formation.
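    The proposed mechanism lends itself to a toy simulation. The sketch below is a hypothetical illustration, not a model from the article: memories are sets of elements, each replay strengthens every element of the replayed pair, and elements shared across related memories accumulate the most strength, approximating the schema the authors describe.

```python
from collections import Counter
import random

# Three related memories sharing a common core ("A", "B").
memories = [
    {"A", "B", "C"},
    {"A", "B", "D"},
    {"A", "B", "E"},
]

strength = Counter()
random.seed(0)

# Overlapping replay: repeatedly reactivate pairs of memories together;
# every element in the replayed pair gets strengthened.
for _ in range(1000):
    m1, m2 = random.sample(memories, 2)
    for element in m1 | m2:
        strength[element] += 1

# Shared elements end up far stronger than memory-specific ones,
# i.e. the overlap (the schema) is selectively strengthened.
print(strength.most_common())
```

    Running this, the shared elements "A" and "B" receive roughly 50% more strengthening than the memory-specific elements, mirroring the selective strengthening of shared structure proposed in the article.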

    Learning, Categorization, Rule Formation, and Prediction by Fuzzy Neural Networks

    National Science Foundation (IRI 94-01659); Office of Naval Research (N00014-91-J-4100, N00014-92-J-4015); Air Force Office of Scientific Research (90-0083, N00014-92-J-4015)

    Data Mining


    A comprehensive conceptual and computational dynamics framework for autonomous regeneration of form and function in biological organisms

    In biology, regeneration is a mysterious phenomenon that has inspired self-repairing systems, robots, and biobots. It is a collective computational process whereby cells communicate to achieve an anatomical set point and restore original function in the regenerated tissue or the whole organism. Despite decades of research, the mechanisms involved in this process are still poorly understood. Likewise, current algorithms are insufficient to overcome this knowledge barrier and enable advances in regenerative medicine, synthetic biology, and living machines/biobots. We propose a comprehensive conceptual framework for the engine of regeneration, with hypotheses for the mechanisms and algorithms of stem cell-mediated regeneration, that enables a system like the planarian flatworm to fully restore anatomical (form) and bioelectric (function) homeostasis after any small- or large-scale damage. The framework extends the available regeneration knowledge with novel hypotheses to propose collective intelligent self-repair machines, with multi-level feedback neural control systems, driven by somatic and stem cells. We computationally implemented the framework to demonstrate the robust recovery of both anatomical and bioelectric homeostasis in a simple worm model resembling the planarian. In the absence of complete regeneration knowledge, the framework contributes to understanding, and to generating hypotheses for, stem cell-mediated regeneration of form and function, which may help advance regenerative medicine and synthetic biology. Further, since our framework is a bio-inspired, bio-computing self-repair machine, it may be useful for building self-repair robots/biobots and artificial self-repair systems.
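    The framework's central loop, cells driven by feedback toward stored anatomical and bioelectric set points, can be illustrated with a small sketch. The model below is purely hypothetical (the state variables, gain, and dynamics are assumptions for illustration, not the authors' implementation): a 1-D tissue holds an anatomical and a bioelectric state per position, and proportional feedback restores both set points after damage.

```python
import numpy as np

# Hypothetical 1-D tissue: one anatomical and one bioelectric state per
# position; the set points encode the target form and function.
n = 20
anatomy_setpoint = np.ones(n)          # target form
voltage_setpoint = np.full(n, -50.0)   # target bioelectric pattern (mV)

anatomy = anatomy_setpoint.copy()
voltage = voltage_setpoint.copy()

# Damage: destroy a stretch of tissue (zero anatomy, depolarize).
anatomy[5:12] = 0.0
voltage[5:12] = 0.0

# Proportional feedback: each step, cells move a fraction of the way
# back toward the set point, standing in for stem-cell-driven repair
# under multi-level feedback control.
gain = 0.2
for step in range(50):
    anatomy += gain * (anatomy_setpoint - anatomy)
    voltage += gain * (voltage_setpoint - voltage)

print("max anatomy error:", np.abs(anatomy_setpoint - anatomy).max())
print("max voltage error:", np.abs(voltage_setpoint - voltage).max())
```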

    Automatic Generation of Efficient Linear Algebra Programs

    The level of abstraction at which application experts reason about linear algebra computations does not match the level of abstraction used by developers of high-performance numerical linear algebra libraries. The former is conveniently captured by high-level languages and libraries such as Matlab and Eigen, while the latter is expressed by the kernels included in the BLAS and LAPACK libraries. Unfortunately, translating a high-level computation into an efficient sequence of kernels is a far-from-trivial task that requires extensive knowledge of both linear algebra and high-performance computing. Internally, almost all high-level languages and libraries use efficient kernels; however, their translation algorithms are too simplistic and thus lead to suboptimal use of those kernels, with significant performance losses. In order both to achieve the productivity of high-level languages and to exploit the efficiency of low-level kernels, we are developing Linnea, a code generator for linear algebra problems. As input, Linnea takes a high-level description of a linear algebra problem and produces as output an efficient sequence of calls to high-performance kernels. On 25 application problems, the code generated by Linnea consistently outperforms Matlab, Julia, Eigen, and Armadillo, with speedups up to and exceeding 10x.
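    To make the abstraction gap concrete, here is a small Python illustration of the kind of rewrite such a generator performs (a hypothetical example, not Linnea's actual input language or generated output): the same least-squares computation written naively at the high level, and a kernel-aware version that maps onto LAPACK factorization routines.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 500))
b = rng.standard_normal(2000)

# High-level formulation, as an application expert might write it:
# x = (A^T A)^{-1} A^T b, with an explicit matrix inverse.
x_naive = np.linalg.inv(A.T @ A) @ (A.T @ b)

# Kernel-aware formulation, the kind a generator like Linnea targets:
# recognize that A^T A is symmetric positive definite and solve via a
# Cholesky factorization (LAPACK potrf/potrs) instead of inverting.
c, low = cho_factor(A.T @ A)
x_fast = cho_solve((c, low), A.T @ b)

print(np.allclose(x_naive, x_fast, atol=1e-8))  # same result, less work
```

    Avoiding the explicit inverse both reduces the flop count and improves numerical stability; discovering rewrites like this automatically, across whole expressions, is the kind of translation problem Linnea addresses.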

    Towards Building Deep Networks with Bayesian Factor Graphs

    We propose a multi-layer network based on the Bayesian framework of Factor Graphs in Reduced Normal Form (FGrn) applied to a two-dimensional lattice. The Latent Variable Model (LVM) is the basic building block of a quadtree hierarchy built on top of a bottom layer of random variables that represent pixels of an image, a feature map, or, more generally, a collection of spatially distributed discrete variables. The multi-layer architecture implements a hierarchical data representation that, via belief propagation, can be used for learning and inference; typical uses are pattern completion, correction, and classification. The FGrn paradigm provides great flexibility and modularity and appears to be a promising candidate for building deep networks: the system can be easily extended by introducing new and different (in cardinality and in type) variables, and prior knowledge, or supervised information, can be introduced at different scales. The FGrn paradigm also provides a handy way of building all kinds of architectures by interconnecting only three types of units: Single-Input Single-Output (SISO) blocks, Sources, and Replicators. The network is designed like a circuit diagram, and belief messages flow bidirectionally through the whole system. The learning algorithms operate only locally within each block. The framework is demonstrated in this paper in a three-layer structure applied to images extracted from a standard data set.
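    As a rough illustration of the message-passing style the FGrn paradigm builds on (a generic sum-product sketch under assumed names, not the authors' architecture or code), the snippet below passes belief messages between a latent variable and an observed variable through a single SISO-like block.

```python
import numpy as np

# Hypothetical two-variable model: latent S (3 states) -> observed X
# (2 states). A SISO-like block reduces to the conditional P(X | S).
prior_S = np.array([0.5, 0.3, 0.2])      # "Source" unit: prior on S
P_X_given_S = np.array([[0.9, 0.1],
                        [0.4, 0.6],
                        [0.2, 0.8]])     # SISO block; rows sum to 1

# Forward message through the block: predicted distribution over X.
msg_forward = prior_S @ P_X_given_S

# Observe X = 1 and send the backward (likelihood) message through
# the block toward S.
likelihood_X = np.array([0.0, 1.0])
msg_backward = P_X_given_S @ likelihood_X

# Belief at S: combine the forward (prior) and backward messages.
belief_S = prior_S * msg_backward
belief_S /= belief_S.sum()

print("P(X):", msg_forward)
print("P(S | X=1):", belief_S)
```

    In the full FGrn architecture such bidirectional messages flow through a whole quadtree of SISO blocks, Sources, and Replicators rather than a single factor.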