120 research outputs found

    Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience

    Full text link
    Physical symbol systems are needed for open-ended cognition. A good way to understand physical symbol systems is by comparing thought to chemistry: both exhibit systematicity, productivity and compositionality. The state of the art in cognitive architectures for open-ended cognition is critically assessed. I conclude that a cognitive architecture that evolves symbol structures in the brain is a promising candidate to explain open-ended cognition. Part 2 of the paper presents such a cognitive architecture.
    Comment: Darwinian Neurodynamics. Submitted as a two-part paper to Living Machines 2013, Natural History Museum, London

    Gridbot: An autonomous robot controlled by a Spiking Neural Network mimicking the brain's navigational system

    Full text link
    It is true that the "best" neural network is not necessarily the one with the most "brain-like" behavior. Understanding biological intelligence, however, is a fundamental goal for several distinct disciplines, and translating that understanding to machines is a fundamental problem in robotics. Propelled by new advances in neuroscience, we developed a spiking neural network (SNN) that draws on mounting experimental evidence that individual neurons are associated with spatial navigation. By following the brain's structure, our model assumes no initial all-to-all connectivity, which could otherwise inhibit its translation to neuromorphic hardware, and learns an uncharted territory by mapping its identified components onto a limited number of neural representations through spike-timing-dependent plasticity (STDP). In our ongoing effort to apply a bio-inspired SNN-controlled robot to real-world spatial mapping applications, we demonstrate here how an SNN can robustly control an autonomous robot in mapping and exploring an unknown environment, while compensating for its own intrinsic hardware imperfections, such as partial or total loss of visual input.
    Comment: 8 pages, 3 figures, International Conference on Neuromorphic Systems (ICONS 2018)
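The abstract above credits spike-timing-dependent plasticity (STDP) with mapping the environment onto neural representations. As a minimal sketch only (the Gridbot model itself is not reproduced here; `a_plus`, `a_minus`, and `tau` are illustrative values, not the paper's parameters), pair-based STDP strengthens a synapse when the presynaptic spike precedes the postsynaptic one and weakens it otherwise:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based STDP rule: potentiate when the presynaptic spike precedes
    the postsynaptic one, depress otherwise. Spike times are in ms."""
    dt = t_post - t_pre
    if dt >= 0:                      # pre before post -> strengthen
        dw = a_plus * np.exp(-dt / tau)
    else:                            # post before pre -> weaken
        dw = -a_minus * np.exp(dt / tau)
    return float(np.clip(w + dw, 0.0, 1.0))   # keep weight bounded

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)  # causal pairing: weight grows
```

The slight asymmetry (`a_minus` > `a_plus`) is a common stabilising choice so that uncorrelated spiking leads to net depression.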

    Event-Driven Technologies for Reactive Motion Planning: Neuromorphic Stereo Vision and Robot Path Planning and Their Application on Parallel Hardware

    Get PDF
    Robotics is increasingly becoming a key factor in technological progress. Despite impressive advances in recent decades, mammalian brains still outperform even the most powerful machines at vision and motion planning. Industrial robots are very fast and precise, but their planning algorithms are not capable enough for highly dynamic environments such as those required for human-robot collaboration (HRC). Without fast and adaptive motion planning, safe HRC cannot be guaranteed. Neuromorphic technologies, including visual sensors and hardware chips, operate asynchronously and thus process spatiotemporal information very efficiently. Event-based visual sensors in particular already outperform conventional synchronous cameras in many applications. Event-based methods therefore have great potential to enable faster and more energy-efficient motion-control algorithms for HRC. This thesis presents an approach to flexible reactive motion control of a robot arm, in which exteroception is achieved through event-based stereo vision and path planning is implemented in a neural representation of the configuration space. The multiview 3D reconstruction is evaluated through a qualitative analysis in simulation and transferred to a stereo system of event-based cameras. A demonstrator with an industrial robot is used to evaluate the reactive collision-free online planning, and also for a comparative study of sampling-based planners. This is complemented by a benchmark of parallel hardware solutions, with robotic path planning chosen as the test scenario. The results show that the proposed neural solutions are an effective way to realise robot control for dynamic scenarios.
    This work lays a foundation for neural solutions in adaptive manufacturing processes, including collaboration with humans, without sacrificing speed or safety. It thereby paves the way for integrating brain-inspired hardware and algorithms into industrial robotics and HRC
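The thesis implements path planning in a neural representation of the configuration space. As a rough, hypothetical illustration of that idea (not the thesis's actual network), planning over a discretized configuration space can be sketched as a wavefront propagating from the start cell, analogous to activity spreading through a neural field until it reaches the goal:

```python
from collections import deque

def wavefront_plan(grid, start, goal):
    """Wavefront (breadth-first) propagation over a discretized configuration
    space. grid[r][c] == 1 marks an obstacle; returns a shortest
    collision-free path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # backtrack along parent links
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                              # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = wavefront_plan(grid, (0, 0), (2, 0))  # detours around the obstacle row
```

Because the wave expands one layer per step, the first arrival at the goal is guaranteed to be a shortest grid path, which is why such propagation schemes suit reactive replanning.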

    Inspired by nature: timescale-free and grid-free event-based computing with spiking neural networks

    Get PDF
    Computer vision is enjoying huge success in visual processing applications such as facial recognition, object identification, and navigation. Most of these studies work with traditional cameras which produce frames at predetermined fixed time intervals. Real-life visual stimuli are, however, generated when changes occur in the environment and are irregular in timing. Biological visual neural systems operate on these changes and are hence free from any fixed timescales that are related to the timing of events in visual input. Inspired by biological systems, neuromorphic devices provide a new way to record visual data. These devices typically have parallel arrays of sensors which operate asynchronously. They have particular potential for robotics due to their low latency, efficient use of bandwidth and low power requirements. There are a variety of neuromorphic devices for detecting different sensory information; this thesis focuses on using the Dynamic Vision Sensor (DVS) for visual data collection. Event-based sensory inputs are generated on demand as changes happen in the environment. There are no systematic timescales in these activities, and the asynchronous nature of the sensors adds to the irregularity of time intervals between events, making event-based data timescale-free. Although the sensor array in vision sensors is generally arranged as a grid, events in the real world exist in continuous space. Biological systems are not restricted to grid-based sampling, and it is an open question whether event-based data could similarly take advantage of grid-free processing algorithms.
    Studying visual data in a way which is timescale-free and grid-free, fundamentally different from traditional video data sampled at fixed time intervals and dense and rigid in space, requires conceptual viewpoints and methods of computation that are not typically employed in existing studies. Bio-inspired computing involves computational components that mimic, or at least take inspiration from, how nature works. This fusion of engineering and biology often provides insights into complex computational problems. Artificial neural networks, a computing paradigm inspired by how our brains work, have been studied widely with visual data. This thesis uses a type of artificial neural network—event-based spiking neural networks—as the basic framework to process event-based visual data. Building upon spiking neural networks, this thesis introduces two methods that process event-based data under the principles of being timescale-free and grid-free. The first method preprocesses events as distributions of Gaussian-shaped spatiotemporal volumes, and then introduces a new neuron model with time-delayed dendrites and dendritic and axonal computation as the main building blocks of the spiking neural network to perform long-term predictions. Gaussians are used for simplicity. This Gaussian-based method is shown in this thesis to outperform a commonly used iterative prediction paradigm on DVS data. The second method involves a new concept for processing event-based data based on the "light cone" idea in physics. Starting from a given point in real space at a given time, a light cone is the set of points in spacetime reachable without exceeding the speed of light, and these points trace out spacetime trajectories called world lines. The light cone concept is applied to DVS data: as an object moves with respect to the DVS, the events generated are related by their speeds relative to the DVS.
    An observer can calculate possible world lines for each point but has no access to the correct one. The idea of a "motion cone" is introduced to refer to the distribution of possible world lines for an event. Motion cones provide a novel theory for the early stages of visual processing. Instead of spatial clustering, world lines produce a new representation determined by a speed-based clustering of events. A novel spiking neural network model with dendritic connections based on motion cones is proposed, with the ability to predict future motion patterns in long-term prediction. Freedom from timescales and fixed grid sizes are fundamental characteristics of neuromorphic event-based data, but few algorithms to date exploit their potential. Focusing on the inter-event relationship in the continuous spatiotemporal volume can preserve these features during processing. This thesis presents two examples of incorporating the timescale-free and grid-free principles into algorithm development and examines their performance on real-world DVS data. These new concepts and models contribute to the neuromorphic computation field by providing new ways of thinking about event-based representations and their associated algorithms. They also have the potential to stimulate rethinking of representations in the early stages of an event-based vision system. To aid algorithm development, a benchmarking data set has been collated, ranging from simple environment changes recorded by a stationary camera to environmentally rich navigation performed by mobile robots. Studies conducted in this thesis use examples from this benchmarking data set, which is also made available to the public.
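The motion-cone idea above groups DVS events by the speed a world line would need to connect them rather than by spatial proximity. A toy sketch of that speed-based grouping (a hypothetical illustration, not the thesis's model; the `v_max` threshold and event values are made up) treats each event as an `(x, y, t)` tuple and keeps only events reachable from a reference event without exceeding a speed bound:

```python
import math

def event_speed(e_ref, e):
    """Speed (pixels/s) a world line would need to connect two DVS events,
    each given as (x, y, t) with t in seconds and e later than e_ref."""
    dx, dy = e[0] - e_ref[0], e[1] - e_ref[1]
    dt = e[2] - e_ref[2]
    return math.hypot(dx, dy) / dt if dt > 0 else float("inf")

def motion_cone_cluster(e_ref, events, v_max=500.0):
    """Keep only events whose connecting world line lies inside the
    'motion cone' of e_ref, i.e. requires speed <= v_max."""
    return [e for e in events if event_speed(e_ref, e) <= v_max]

ref = (0.0, 0.0, 0.0)
events = [(10.0, 0.0, 0.05),    # needs 200 px/s -> inside the cone
          (100.0, 0.0, 0.05)]   # needs 2000 px/s -> outside the cone
inside = motion_cone_cluster(ref, events)
```

Clustering on required speed rather than pixel distance is what makes the grouping grid-free: two distant events can belong together if a plausible motion connects them.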

    The Development of Bio-Inspired Cortical Feature Maps for Robot Sensorimotor Controllers

    Get PDF
    Full version unavailable due to 3rd party copyright restrictions.
    This project applies principles from the field of Computational Neuroscience to Robotics research, in particular to develop systems inspired by how nature manages to solve sensorimotor coordination tasks. The overall aim has been to build a self-organising sensorimotor system using biologically inspired techniques based upon human cortical development, which can in the future be implemented in neuromorphic hardware. This can then deliver the benefits of low power consumption and real-time operation, but with flexible learning onboard autonomous robots. A core principle is the Self-Organising Feature Map, which is based upon the theory of how 2D maps develop in real cortex to represent complex information from the environment. A framework is described for developing feature maps for both motor and visual directional selectivity, representing eight different directions of motion, as well as how they can be coupled together to make a basic visuomotor system. In contrast to many previous works which use artificially generated visual inputs (for example, image sequences of oriented moving bars or mathematically generated Gaussian bars), a novel feature of the current work is that the visual input is generated by a DVS 128 silicon retina camera, which is a neuromorphic device and produces spike events in a frame-free way. One of the main contributions of this work has been to develop a method of autonomous regulation of the map development process which adapts the learning dependent upon input activity. The main results show that distinct directionally selective maps for both the motor and visual modalities are produced under a range of experimental scenarios.
    The adaptive learning process successfully controls the rate of learning in both motor and visual map development and is used to indicate when sufficient patterns have been presented, thus avoiding the need to define in advance the quantity and range of training data. The coupling training experiments show that the visual input learns to modulate the original motor map response, creating a new visual-motor topological map.
    EPSRC, University of Plymouth Graduate School
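The Self-Organising Feature Map at the core of the abstract above pulls a best-matching unit and its grid neighbours toward each input, so nearby units come to represent similar stimuli. A minimal generic sketch of one SOM update step (illustrative only; the thesis's adaptive learning-rate regulation is not modelled, and `lr` and `sigma` are assumed constants):

```python
import numpy as np

def som_step(weights, x, lr=0.2, sigma=1.0):
    """One Self-Organising Feature Map update on a 2D grid of weight
    vectors (shape rows x cols x dim): find the best-matching unit (BMU),
    then pull it and its neighbours toward the input x."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)          # distance to input
    bmu = np.unravel_index(np.argmin(dists), (rows, cols))
    for r in range(rows):
        for c in range(cols):
            # Gaussian neighbourhood centred on the BMU
            g = np.exp(-((r - bmu[0]) ** 2 + (c - bmu[1]) ** 2)
                       / (2 * sigma ** 2))
            weights[r, c] += lr * g * (x - weights[r, c])
    return bmu

rng = np.random.default_rng(0)
weights = rng.random((4, 4, 2))            # 4x4 map of 2D weight vectors
bmu = som_step(weights, np.array([0.9, 0.1]))
```

In practice `lr` and `sigma` are annealed over training; the abstract's contribution is precisely to regulate that schedule automatically from input activity instead of fixing it in advance.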
