185 research outputs found
Neural Distributed Autoassociative Memories: A Survey
Introduction. Neural network models of autoassociative, distributed memory
allow storage and retrieval of many items (vectors) where the number of stored
items can exceed the vector dimension (the number of neurons in the network).
This opens the possibility of a sublinear time search (in the number of stored
items) for approximate nearest neighbors among vectors of high dimension. The
purpose of this paper is to review models of autoassociative, distributed
memory that can be naturally implemented by neural networks (mainly with local
learning rules and iterative dynamics based on information locally available to
neurons). Scope. The survey focuses mainly on the Hopfield, Willshaw, and
Potts networks, which have connections between pairs of neurons and operate
on sparse binary vectors. We discuss not only autoassociative memory but also
the generalization properties of these networks. We also consider neural
networks with higher-order connections and networks with a bipartite graph
structure for non-binary data with linear constraints. Conclusions. In
conclusion we discuss the relations to similarity search, advantages and
drawbacks of these techniques, and topics for further research. An interesting
and still not completely resolved question is whether neural autoassociative
memories can search for approximate nearest neighbors faster than other index
structures for similarity search, in particular for the case of very high
dimensional vectors.
Comment: 31 pages
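As a toy illustration of the class of models surveyed (a Hopfield-style network with a local Hebbian learning rule and iterative retrieval dynamics), the sketch below stores bipolar patterns and retrieves one from a corrupted cue. The dimensions, seed, and noise level are arbitrary choices for illustration, not taken from the paper:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian local learning rule: W = (1/n) * sum_k x_k x_k^T, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, probe, steps=10):
    """Iterative retrieval: each neuron updates from locally available input."""
    x = probe.astype(float).copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0  # break ties toward +1
    return x

# Store two random bipolar (+1/-1) patterns and recall one from a noisy cue.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 64)).astype(float)
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[:8] *= -1.0  # flip 8 of the 64 bits
restored = recall(W, noisy)
```

With only two stored patterns in 64 neurons the network is far below capacity, so the dynamics converge back to the stored pattern despite the corrupted bits.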
Prediction error-driven memory consolidation for continual learning. On the case of adaptive greenhouse models
This work presents an adaptive architecture that performs online learning and
addresses catastrophic forgetting by means of episodic memories and
prediction-error-driven memory consolidation. In line with evidence from
cognitive science and neuroscience, memories are retained depending on their
congruency with the prior knowledge stored in the system, estimated in terms
of the prediction error of a generative model. Moreover, this AI system is
applied to an innovative problem in the horticulture industry: the learning
and transfer of greenhouse models. A model trained on data recorded at
research facilities is transferred to a production greenhouse.
Comment: Revised version. Paper under review, submitted to the Springer German
Journal on Artificial Intelligence (Künstliche Intelligenz), Special Issue on
Developmental Robotics
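The gating idea (retain episodes according to how well the system's prior knowledge predicts them) can be sketched with a toy linear stand-in for the generative model. The abstract does not specify the model or the direction of the gate; this sketch assumes high-error (surprising) episodes are prioritized when the buffer is full, and all names and numbers below are hypothetical:

```python
import numpy as np

w = np.array([1.0, -2.0])  # toy stand-in for the system's prior knowledge

def prediction_error(w, x, y):
    """Surprise of episode (x, y) under the generative model y_hat = w @ x."""
    return float((y - w @ x) ** 2)

def consolidate(buffer, w, episodes, capacity=4):
    """Prediction-error-gated retention: when the episodic buffer is full,
    keep the episodes the generative model predicts worst (most surprising)."""
    buffer.extend((prediction_error(w, x, y), x, y) for x, y in episodes)
    buffer.sort(key=lambda e: e[0], reverse=True)
    del buffer[capacity:]
    return buffer

# Six candidate episodes; only the four most surprising survive consolidation.
episodes = [
    (np.array([1.0, 0.0]), 1.0),   # perfectly predicted -> error 0
    (np.array([0.0, 1.0]), -2.0),  # perfectly predicted -> error 0
    (np.array([1.0, 1.0]), 5.0),   # error 36
    (np.array([2.0, 0.0]), 0.0),   # error 4
    (np.array([0.0, 2.0]), 0.0),   # error 16
    (np.array([1.0, 0.0]), 3.0),   # error 4
]
buffer = consolidate([], w, episodes)
```

The two perfectly predicted episodes are dropped: they add nothing beyond what the prior model already encodes, which is the intuition behind congruency-based retention.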
Visual attention and object naming in humanoid robots using a bio-inspired spiking neural network
© 2018 The Authors. Recent advances in behavioural and computational neuroscience, cognitive robotics, and in the hardware implementation of large-scale neural networks provide the opportunity for an accelerated understanding of brain functions and for the design of interactive robotic systems based on brain-inspired control systems. This is especially the case in the domain of action and language learning, given the significant scientific and technological developments in this field. In this work we describe how a neuroanatomically grounded spiking neural network for visual attention has been extended with a word learning capability and integrated with the iCub humanoid robot to demonstrate attention-led object naming. Experiments were carried out with both a simulated and a real iCub robot platform with successful results. The iCub robot is capable of associating a label to an object with a ‘preferred’ orientation when visual and word stimuli are presented concurrently in the scene, as well as attending to said object, thus naming it. After learning is complete, the name of the object can be recalled successfully when only the visual input is present, even when the object has been moved from its original position or when other objects are present as distractors.
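The core binding mechanism described here, associating a word with a visual stimulus when the two are presented concurrently and later recalling the word from vision alone, can be illustrated with a rate-coded Hebbian sketch. This is not the paper's spiking model; the encodings, sizes, and learning rate below are hypothetical:

```python
import numpy as np

# Disjoint "visual feature" patterns standing in for two objects with
# different preferred orientations (toy encoding, not the paper's).
vis_a = np.r_[np.ones(8), np.zeros(8)]
vis_b = np.r_[np.zeros(8), np.ones(8)]
word_a = np.eye(8)[0]  # one-hot label units
word_b = np.eye(8)[1]

def present_concurrently(W, visual, word, lr=0.5):
    """Hebbian update: links between co-active visual and word units grow."""
    return W + lr * np.outer(word, visual)

def recall_word(W, visual):
    """Drive the word layer from vision alone; winner-take-all readout."""
    out = np.zeros(W.shape[0])
    out[np.argmax(W @ visual)] = 1.0
    return out

W = np.zeros((8, 16))
W = present_concurrently(W, vis_a, word_a)
W = present_concurrently(W, vis_b, word_b)
```

After the concurrent presentations, `recall_word(W, vis_a)` reactivates the label unit that was co-active with that visual pattern, mirroring the recall-from-vision-only behaviour reported in the abstract.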
Development of a Large-Scale Integrated Neurocognitive Architecture - Part 2: Design and Architecture
In Part 1 of this report, we outlined a framework for creating an intelligent agent
based upon modeling the large-scale functionality of the human brain. Building on
those results, we begin Part 2 by specifying the behavioral requirements of a
large-scale neurocognitive architecture. The core of our long-term approach remains
focused on creating a network of neuromorphic regions that provide the mechanisms
needed to meet these requirements. However, for the short term of the next few years,
it is likely that optimal results will be obtained by using a hybrid design that
also includes symbolic methods from AI/cognitive science and control processes from the
field of artificial life. We accordingly propose a three-tiered architecture that
integrates these different methods, and describe an ongoing computational study of a
prototype 'mini-Roboscout' based on this architecture. We also examine the implications
of some non-standard computational methods for developing a neurocognitive agent.
This examination includes computational experiments assessing the effectiveness of
genetic programming as a design tool for recurrent neural networks for sequence
processing, and experiments measuring the speed-up obtained for adaptive neural
networks when they are executed on a graphical processing unit (GPU) rather than a
conventional CPU. We conclude that the implementation of a large-scale neurocognitive
architecture is feasible, and outline a roadmap for achieving this goal.
Machine learning and its applications in reliability analysis systems
In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving RAs performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both Neural Network learning and Symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating RAs have been discussed. According to the results of our survey, we suggest that currently the best design for RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still some improvements that can be made through the application of Machine Learning. By implanting the 'learning element', MORA will become the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.
Where Brain, Body and World Collide
The brain fascinates because it is the biological organ of mindfulness itself. It is the inner engine that drives intelligent behavior. Such a depiction provides a worthy antidote to the once-popular vision of the mind as somehow lying outside the natural order. But it is a vision with a price. For it has concentrated much theoretical attention on an uncomfortably restricted space: the space of the inner neural machine, divorced from the wider world, which then enters the story only via the hygienic gateways of perception and action. Recent work in neuroscience, robotics and psychology casts doubt on the effectiveness of such a shrunken perspective. Instead, it stresses the unexpected intimacy of brain, body and world and invites us to attend to the structure and dynamics of extended adaptive systems -- ones involving a much wider variety of factors and forces. Whilst it needs to be handled with some caution, I believe there is much to be learnt from this broader vision. The mind itself, if such a vision is correct, is best understood as the activity of an essentially situated brain: a brain at home in its proper bodily, cultural and environmental niche.