
    Scaling up a Boltzmann machine model of hippocampus with visual features for mobile robots

    Previous papers [4], [5] have described a detailed mapping between biological hippocampal navigation and a temporal restricted Boltzmann machine [20] with unitary coherent particle filtering. These models focused on the biological structures and used simplified microworlds in their implemented examples. As a first step in scaling the model up towards practical bio-inspired robotic navigation, we present new results with the model applied to real-world visual data, though still limited to a discretised configuration space. To extract useful features from visual input we apply the SURF transform followed by a new lamellae-based winner-take-all Dentate Gyrus stage. This new visual processing stream allows the navigation system to function without the simplifying data assumption required by the previous models, and brings the hippocampal model closer to being a practical robotic navigation system.
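    As a rough, generic illustration of the visual processing stream described in this abstract, the sketch below pairs OpenCV SURF descriptors with a per-lamella winner-take-all step. It is not the authors' code: the projection weights, lamella count and mean-pooling choice are placeholder assumptions.

```python
# Illustrative sketch only: SURF features feeding a lamella-organised
# winner-take-all layer standing in for the Dentate Gyrus stage.
# cv2.xfeatures2d.SURF_create requires the opencv-contrib build.
import numpy as np
import cv2


def surf_descriptors(image_gray, hessian_threshold=400):
    """Extract SURF descriptors (64-D vectors) from a grayscale image."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    _, descriptors = surf.detectAndCompute(image_gray, None)
    return descriptors if descriptors is not None else np.zeros((1, 64))


def lamella_wta(descriptors, weights, n_lamellae=10):
    """One winner per lamella: project pooled descriptors onto 'granule cells'
    (rows of `weights`, shape (n_cells, 64)) and keep only the most active
    cell in each lamella, giving a sparse binary code."""
    activations = descriptors.mean(axis=0) @ weights.T        # (n_cells,)
    per_lamella = len(weights) // n_lamellae
    code = np.zeros(len(weights))
    for i in range(n_lamellae):
        start = i * per_lamella
        winner = start + np.argmax(activations[start:start + per_lamella])
        code[winner] = 1.0                                     # single winner per lamella
    return code
```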

    Machines Learning - Towards a New Synthetic Autobiographical Memory

    Autobiographical memory is the organisation of episodes and contextual information from an individual’s experiences into a coherent narrative, which is key to our sense of self. Formation and recall of autobiographical memories are essential for effective, adaptive behaviour in the world, providing contextual information necessary for planning actions and for memory functions such as event reconstruction. A synthetic autobiographical memory system would endow intelligent robotic agents with many essential components of cognition through active compression and storage of historical sensorimotor data in an easily addressable manner. Current approaches neither fulfil these functional requirements nor build upon recent understanding of predictive coding, deep learning, or the neurobiology of memory. This position paper highlights desiderata for a modern implementation of synthetic autobiographical memory based on human episodic memory, and proposes that a recently developed model of hippocampal memory could be extended into a generalised model of autobiographical memory. Initial implementation will be targeted at social interaction, where current synthetic autobiographical memory systems have had success.

    Scaling a hippocampus model with GPU parallelisation and test-driven refactoring

    The hippocampus is the brain area used for localisation, mapping and episodic memory. Humans and animals can outperform robotic systems in these tasks, so functional models of the hippocampus may help improve robotic navigation, for example in self-driving cars. Previous work developed a biologically plausible model of the hippocampus based on the Unitary Coherent Particle Filter (UCPF) and a Temporal Restricted Boltzmann Machine, which was able to learn to navigate around small test environments. However, it was implemented as serial software, which becomes very slow as the environments and the number of neurons scale up. Modern GPUs can parallelise the execution of neural networks. The present Neural Software Engineering study develops a GPU-accelerated version of the UCPF hippocampus software, using the formal Software Engineering techniques of profiling, optimisation and test-driven refactoring. Results show that the model can benefit greatly from parallel execution, which may enable it to scale from toy environments and applications to real-world ones such as self-driving car navigation. The refactored parallel code is released to the community as open-source software as part of this publication.
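    The released software itself is not reproduced here; the sketch below only illustrates the general pattern of keeping one RBM Gibbs half-step backend-agnostic, so the identical code can run serially (NumPy) or on a GPU (CuPy), with a small regression test in the spirit of test-driven refactoring. Function names and shapes are illustrative assumptions, not the authors' API.

```python
# Illustrative sketch: the same Gibbs half-step runs on CPU (NumPy) or on
# GPU (CuPy) depending on the array module passed in. Not the released
# UCPF/TRBM code.
import numpy as np


def sample_hidden(visible, weights, hidden_bias, rng, xp=np):
    """p(h=1|v) = sigmoid(v @ W + b); pass xp=cupy (with CuPy arrays and a
    CuPy RNG) to execute the identical code on the GPU."""
    prob = 1.0 / (1.0 + xp.exp(-(visible @ weights + hidden_bias)))
    sample = (rng.random(prob.shape) < prob).astype(prob.dtype)
    return sample, prob


def test_hidden_probabilities_are_valid():
    """Regression-style test kept through refactoring: CPU path stays correct."""
    rng = np.random.default_rng(0)
    v = rng.random((8, 100))
    W = rng.standard_normal((100, 50))
    b = np.zeros(50)
    sample, prob = sample_hidden(v, W, b, rng)
    assert sample.shape == prob.shape == (8, 50)
    assert np.all((prob >= 0.0) & (prob <= 1.0))
```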

    Extending a Hippocampal Model for Navigation Around a Maze Generated from Real-World Data

    An essential component in the formation of understanding is the ability to use past experience to comprehend the here and now, and to aid selection of future action. Past experience is stored as memories which are then available for recall at very short notice, allowing for understanding of short- and long-term action. Autobiographical memory (ABM) is a form of temporally organised memory: the organisation of episodes and contextual information from an individual’s experience into a coherent narrative, which is key to a sense of self. Formation and recall of memories are essential for effective and adaptive behaviour in the world, providing contextual information necessary for planning actions and for memory functions such as event reconstruction. Here we tested and developed a previously defined computational memory model, based on hippocampal structure and function, as a first step towards developing a synthetic model of human ABM (SAM). The hippocampal model chosen has functions analogous to those of human ABM. We trained the model on real-world sensory data and demonstrate successful, biologically plausible memory formation and recall in a navigational task. The hippocampal model will later be extended for application in a biologically inspired system for human-robot interaction.

    Toward evolutionary and developmental intelligence

    Given the phenomenal advances in artificial intelligence in specific domains like visual object recognition and game playing by deep learning, expectations are rising for building artificial general intelligence (AGI) that can flexibly find solutions in unknown task domains. One approach to AGI is to set up a variety of tasks and design AI agents that perform well in many of them, including those the agent faces for the first time. One caveat of such an approach is that the best-performing agent may be just a collection of domain-specific AI agents switched in for a given domain. Here we propose an alternative approach that focuses on the process of acquiring intelligence through active interactions in an environment. We call this approach evolutionary and developmental intelligence (EDI). We first review the current status of artificial intelligence, brain-inspired computing and developmental robotics, and define the conceptual framework of EDI. We then explore how we can integrate advances in neuroscience, machine learning, and robotics to construct EDI systems, and how building such systems can help us understand animal and human intelligence.

    Sample efficiency, transfer learning and interpretability for deep reinforcement learning

    Deep learning has revolutionised artificial intelligence, where the application of increased compute to train neural networks on large datasets has resulted in improvements in real-world applications such as object detection, text-to-speech synthesis and machine translation. Deep reinforcement learning (DRL) has similarly shown impressive results in board and video games, but less so in real-world applications such as robotic control. To address this, I have investigated three factors prohibiting further deployment of DRL: sample efficiency, transfer learning, and interpretability. To decrease the amount of data needed to train DRL systems, I have explored various storage strategies and exploration policies for episodic control (EC) algorithms, resulting in the application of online clustering to improve the memory efficiency of EC algorithms, and of the maximum-entropy mellowmax policy to improve their sample efficiency and final performance. To improve performance during transfer learning, I have shown that a multi-headed neural network architecture trained using hierarchical reinforcement learning can retain the benefits of positive transfer between tasks while mitigating the interference effects of negative transfer. I additionally investigated the use of multi-headed architectures to reduce catastrophic forgetting in the continual learning setting. While the use of multiple heads worked well within a simple environment, it was of limited use within a more complex domain, indicating that this strategy does not scale well. Finally, I applied a wide range of quantitative and qualitative techniques to better interpret trained DRL agents. In particular, I compared the effects of training DRL agents both with and without visual domain randomisation (DR), a popular technique to achieve simulation-to-real transfer, providing a series of tests that can be applied before real-world deployment. One of the major findings is that DR produces more entangled representations within trained DRL agents, indicating quantitatively that they are invariant to nuisance factors associated with the DR process. Additionally, while my environment allowed agents trained without DR to succeed without requiring complex recurrent processing, all agents trained with DR appear to integrate information over time, as evidenced through ablations on the recurrent state.
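    For readers unfamiliar with the maximum-entropy mellowmax policy mentioned above, the following sketch shows the standard mellowmax operator and the temperature search that yields the policy (after Asadi & Littman, 2017). It is a generic illustration with an assumed omega and bracketing interval, not the thesis' episodic-control implementation.

```python
# Generic sketch of mellowmax and the maximum-entropy mellowmax policy;
# omega and the brentq bracketing interval are assumptions.
import numpy as np
from scipy.optimize import brentq


def mellowmax(q, omega=5.0):
    """mm_omega(q) = log(mean(exp(omega * q))) / omega, a smooth soft maximum."""
    q = np.asarray(q, dtype=float)
    shifted = omega * (q - q.max())                      # stabilise the exponent
    return (np.log(np.exp(shifted).mean()) + omega * q.max()) / omega


def mellowmax_policy(q, omega=5.0):
    """Boltzmann policy whose expected advantage relative to mm_omega is zero."""
    q = np.asarray(q, dtype=float)
    adv = q - mellowmax(q, omega)                        # advantages w.r.t. mellowmax value

    def expected_advantage(beta):
        logits = beta * adv
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return float(p @ adv)                            # root: expected advantage = 0

    beta = brentq(expected_advantage, -1e4, 1e4)         # root-find the temperature
    logits = beta * adv
    p = np.exp(logits - logits.max())
    return p / p.sum()
```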

    Intelligent control of mobile robot with redundant manipulator & stereovision: quantum / soft computing toolkit

    The task of designing an intelligent control system applying soft and quantum computational intelligence technologies is discussed. A mobile robot with a redundant manipulator and stereovision is introduced as an example control object. Design of robust knowledge bases is performed using a developed computational intelligence toolkit for quantum / soft computing (QC/SCOptKB™). The knowledge base self-organization process of fuzzy homogeneous regulators through the application of end-to-end quantum computing information technology is described. The coordination control between the mobile robot and the redundant manipulator with stereovision, based on soft computing, is also described. The general design methodology of a generalizing control unit based on the physical laws of quantum computing (a quantum information-thermodynamic trade-off between control quality distribution and the knowledge base self-organization goal) is considered. The modernization of the pattern recognition system based on stereo vision technology is presented. The effectiveness of the proposed methodology is demonstrated in comparison with control system structures based on soft computing, for unforeseen control situations involving the sensor system.
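    The toolkit itself is not shown in code here; purely as a loose, generic illustration of what a fuzzy regulator is, the sketch below implements a toy two-input fuzzy PD controller with invented membership functions, rule base and output levels, unrelated to the QC/SCOptKB™ internals.

```python
# Toy fuzzy PD regulator, for illustration only; membership functions,
# rule base and output singletons are invented placeholders.
import numpy as np


def tri(x, a, b, c):
    """Triangular membership function rising over [a, b] and falling over [b, c]."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))


def fuzzy_pd(error, d_error):
    """Two-input fuzzy regulator: fire all NEG/ZERO/POS rule pairs, then
    defuzzify by the weighted average of singleton outputs."""
    sets = {"NEG": (-2.0, -1.0, 0.0), "ZERO": (-1.0, 0.0, 1.0), "POS": (0.0, 1.0, 2.0)}
    out = {"NEG": -1.0, "ZERO": 0.0, "POS": 1.0}
    num = den = 0.0
    for le, pe in sets.items():
        for ld, pd_ in sets.items():
            strength = min(tri(error, *pe), tri(d_error, *pd_))
            # crude rule base: follow whichever input deviates more strongly
            rule_out = out[le] if abs(out[le]) >= abs(out[ld]) else out[ld]
            num += strength * rule_out
            den += strength
    return num / den if den > 0.0 else 0.0
```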

    Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks

    Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
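    As a minimal, generic illustration of the EPANN idea (not taken from the paper), the sketch below evolves the four coefficients of a generalised Hebbian plasticity rule with a simple (1+1)-style hill-climber on a toy association task; real EPANN studies evolve far richer architectures, dynamics and rules.

```python
# Toy EPANN-style sketch: a four-coefficient generalised Hebbian rule is the
# genome; a (1+1)-style hill-climber evolves it on a trivial association task.
import numpy as np


def hebbian_update(w, pre, post, genome, lr=0.05):
    """w_ij += lr * (A*pre_i*post_j + B*pre_i + C*post_j + D)."""
    A, B, C, D = genome
    return w + lr * (A * np.outer(pre, post) + B * pre[:, None] + C * post[None, :] + D)


def lifetime_fitness(genome, seed, steps=50):
    """One 'lifetime': plastic weights should learn to copy the first input."""
    rng = np.random.default_rng(seed)
    w = np.zeros((4, 1))
    error = 0.0
    for _ in range(steps):
        x = rng.random(4)
        target = np.array([x[0]])                          # toy task: reproduce x[0]
        y = np.tanh(x @ w)
        error += float(((y - target) ** 2).sum())
        w = hebbian_update(w, x, target - y, genome)       # plastic change during life
    return -error                                          # higher fitness = lower error


rng = np.random.default_rng(0)
genome = rng.standard_normal(4) * 0.1
best = lifetime_fitness(genome, seed=1)
for _ in range(200):                                       # simple evolutionary loop
    child = genome + rng.standard_normal(4) * 0.05         # mutate the plasticity rule
    fit = lifetime_fitness(child, seed=1)                  # same task seed for fair comparison
    if fit > best:
        genome, best = child, fit
```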