11 research outputs found

    The Development of Bio-Inspired Cortical Feature Maps for Robot Sensorimotor Controllers

    Get PDF
Full version unavailable due to third-party copyright restrictions. This project applies principles from the field of Computational Neuroscience to Robotics research, in particular to develop systems inspired by how nature solves sensorimotor coordination tasks. The overall aim has been to build a self-organising sensorimotor system using biologically inspired techniques based upon human cortical development, which can in the future be implemented in neuromorphic hardware. This would deliver the benefits of low power consumption and real-time operation together with flexible learning on board autonomous robots. A core principle is the Self-Organising Feature Map, which is based upon the theory of how 2D maps develop in real cortex to represent complex information from the environment. A framework for developing feature maps for both motor and visual directional selectivity, representing eight different directions of motion, is described, as well as how these maps can be coupled together to make a basic visuomotor system. In contrast to many previous works that use artificially generated visual inputs (for example, image sequences of oriented moving bars or mathematically generated Gaussian bars), a novel feature of the current work is that the visual input is generated by a DVS 128 silicon retina camera, a neuromorphic device that produces spike events in a frame-free way. One of the main contributions of this work has been to develop a method of autonomous regulation of the map development process, which adapts the learning depending upon input activity. The main results show that distinct directionally selective maps for both the motor and visual modalities are produced under a range of experimental scenarios. The adaptive learning process successfully controls the rate of learning in both motor and visual map development and is used to indicate when sufficient patterns have been presented, thus avoiding the need to define in advance the quantity and range of training data. The coupling training experiments show that the visual input learns to modulate the original motor map response, creating a new visual-motor topological map. Funded by EPSRC and the University of Plymouth Graduate School.
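The core learning principle summarised above is the Kohonen-style Self-Organising Feature Map. Below is a minimal sketch of that update rule, using synthetic vectors for eight directions of motion in place of the DVS 128 event stream; the grid size, decay schedules, and input encoding are illustrative assumptions, not the thesis' actual setup.

```python
import numpy as np

def train_som(inputs, grid_shape=(10, 10), epochs=20,
              lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen SOM: each grid unit learns a prototype vector."""
    rng = np.random.default_rng(seed)
    h, w = grid_shape
    dim = inputs.shape[1]
    weights = rng.random((h, w, dim))
    # Pre-compute grid coordinates for neighbourhood distances.
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys, xs], axis=-1).astype(float)

    for epoch in range(epochs):
        frac = epoch / max(1, epochs - 1)
        lr = lr0 * (1.0 - 0.9 * frac)        # decaying learning rate
        sigma = sigma0 * (1.0 - 0.9 * frac)  # shrinking neighbourhood radius
        for x in rng.permutation(inputs):
            # Best-matching unit: grid cell whose prototype is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Gaussian neighbourhood centred on the best-matching unit.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            nb = np.exp(-grid_d2 / (2.0 * sigma ** 2))
            weights += lr * nb[..., None] * (x - weights)
    return weights

# Toy input: unit vectors for eight directions of motion, with noise.
angles = np.repeat(np.arange(8) * np.pi / 4, 50)
data = np.stack([np.cos(angles), np.sin(angles)], axis=1)
data += np.random.default_rng(1).normal(scale=0.05, size=data.shape)
som = train_som(data)
```

In the thesis the learning rate is regulated autonomously from input activity; a fixed decay schedule stands in here for brevity.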

    OPTIMIZATION OF TIME-RESPONSE AND AMPLIFICATION FEATURES OF EGOTs FOR NEUROPHYSIOLOGICAL APPLICATIONS

    Get PDF
In device engineering, basic neuron-to-neuron communication has recently inspired the development of increasingly structured and efficient brain-mimicking setups in which the information flow can be processed with strategies resembling physiological ones. This is possible thanks to organic neuromorphic devices, which can share the same electrolytic medium and adjust their reciprocal connection weights according to temporal features of the input signals. In a parallel, although conceptually deeply interconnected, fashion, device engineers are directing their efforts towards novel tools to interface with the brain and to decipher its signalling strategies. This has led to several technological advances which allow scientists to transduce brain activity and, piece by piece, to create a detailed map of its functions. This effort extends over a wide spectrum of length scales, zooming out from neuron-to-neuron communication up to the global activity of neural populations. Both of these scientific endeavours, namely mimicking neural communication and transducing brain activity, can benefit from the technology of Electrolyte-Gated Organic Transistors (EGOTs). EGOTs are low-power electronic devices that functionally integrate the electrolytic environment through the exploitation of organic mixed ionic-electronic conductors. This enables the conversion of ionic signals into electronic ones, making such architectures ideal building blocks for neuroelectronics, and has driven extensive scientific and technological investigation of EGOTs. Such devices have been successfully demonstrated both as transducers and amplifiers of electrophysiological activity and as neuromorphic units. These promising results arise from the fact that EGOTs are active devices, which widely extends their applicability window beyond the capabilities of passive electronics (i.e. electrodes) but poses major integration hurdles. Being transistors, EGOTs require two driving voltages to operate. If, on the one hand, the presence of two voltages is an advantage for modulating the device response (e.g. for devising EGOT-based neuromorphic circuitry), on the other hand it can be detrimental in brain interfaces, since it may result in a non-null bias applied directly to the brain. If such a voltage exceeds the electrochemical stability window of water, undesired faradaic reactions may lead to critical tissue and/or device damage. This work addresses EGOT applications in neuroelectronics from the dual perspective described above, spanning from neuromorphic device engineering to the implementation of in vivo brain-device interfaces. The advantages of using three-terminal architectures for neuromorphic devices, achieving reversible fine-tuning of their response plasticity, are highlighted. Jointly, the possibility of obtaining a multilevel memory unit by acting on the gate potential is discussed. Additionally, a novel mode of operation for EGOTs is introduced, enabling full retention of amplification capability while, at the same time, avoiding the application of a bias to the brain. Starting from these premises, a novel set of ultra-conformable active micro-epicortical arrays is presented, which fully integrate in situ fabricated EGOT recording sites onto medical-grade polyimide substrates. Finally, an all-organic circuit for signal processing is presented, exploiting ad hoc designed organic passive components coupled with EGOT devices.
This unprecedented approach provides the possibility to sort complex signals into their constitutive frequency components in real time, thereby delineating innovative strategies to devise organic-based functional building blocks for brain-machine interfaces.
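As a rough illustration of the biasing concern discussed above, the sketch below treats an EGOT as a simple small-signal transconductance element and flags gate/drain bias pairs whose magnitude would exceed an assumed safe fraction of the electrochemical stability window of water. All parameter values and the safety criterion are illustrative assumptions, not measurements or rules from this work.

```python
# Illustrative numbers only: real EGOT parameters and safe-operation limits
# must come from device characterisation, not from this sketch.
WATER_WINDOW_V = 1.23              # nominal electrochemical stability window of water
SAFE_LIMIT_V = WATER_WINDOW_V / 2  # assumed conservative bound around 0 V

def egot_drain_current(v_gs, v_ds, gm=2e-3, g_out=5e-5):
    """Toy small-signal model: drain current from transconductance plus output conductance."""
    return gm * v_gs + g_out * v_ds

def bias_is_tissue_safe(v_gs, v_ds):
    """Flag bias pairs whose magnitude could drive faradaic reactions at the tissue."""
    return max(abs(v_gs), abs(v_ds)) <= SAFE_LIMIT_V

for v_gs, v_ds in [(0.2, 0.1), (0.8, 0.3)]:
    i_d = egot_drain_current(v_gs, v_ds)
    print(f"Vgs={v_gs} V, Vds={v_ds} V -> Id={i_d * 1e3:.3f} mA, "
          f"tissue-safe: {bias_is_tissue_safe(v_gs, v_ds)}")
```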

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One

    Get PDF

    Sensor fusion in distributed cortical circuits

    Get PDF
The fundamental drive of nature is to balance, to survive, and to reach perfection. Evolution in biological systems is a key signature of this quintessence. Survival cannot be achieved without understanding the surrounding world. How could a fruit fly live without searching for food, and thus without some form of perception to guide its behaviour? The nervous system of the fruit fly, with its hundred thousand neurons, can perform very complicated tasks that are beyond the power of an advanced supercomputer. Recently developed computing machines are made of billions of transistors and are remarkably fast at precise calculations, yet these machines are unable to perform a single task that an insect accomplishes with thousands of neurons. The complexity of information processing and data compression in a single biological neuron and in neural circuits is not comparable with what has been achieved to date in transistors and integrated circuits. Moreover, the style of information processing in neural systems is very different from that employed by microprocessors, which is mostly centralized. Almost all cognitive functions are generated by the combined effort of multiple brain areas. In mammals, cortical regions are organized hierarchically and are reciprocally interconnected, exchanging information from multiple senses. This hierarchy at the circuit level also preserves the sensory world at different levels of complexity and within the scope of multiple modalities. The main behavioural advantage is to understand the real world through multiple sensory systems and thereby to provide a robust and coherent form of perception. When the quality of a sensory signal drops, the brain can alternatively employ other information pathways to handle cognitive tasks, or even calibrate the error-prone sensory node. The mammalian brain also takes advantage of multimodal processing in learning and development, where one sensory system helps another sensory modality to develop. Multisensory integration is considered one of the main factors that generate consciousness in humans, although we still do not know where exactly the information is consolidated into a single percept, nor what the underpinning neural mechanism of this process is. One straightforward hypothesis suggests that the uni-sensory signals are pooled in a poly-sensory convergence zone, which creates a unified form of perception; but it is hard to believe that a single dedicated region realizes this functionality. Using a set of realistic neuro-computational principles, I have explored theoretically how multisensory integration can be performed within a distributed hierarchical circuit. I argue that the interaction of cortical populations can be interpreted as a specific form of relation satisfaction, in which the information preserved in one neural ensemble must agree with incoming signals from connected populations according to a relation function. This relation function can be seen as a coherency function which is implicitly learnt through synaptic strengths. Apart from the fact that the real world is composed of multisensory attributes, the sensory signals are subject to uncertainty. This requires a cortical mechanism that incorporates the statistical parameters of the sensory world in neural circuits and deals with the issue of inaccuracy in perception.
In this thesis I argue that the intrinsic stochasticity of neural activity provides a systematic mechanism to encode probabilistic quantities, e.g. reliability and prior probability, within neural circuits. The systematic benefit of neural stochasticity is well illustrated by the Duns Scotus paradox: imagine a donkey with a deterministic brain that is exposed to two identical food rewards; indecision could leave the animal to starve. In this thesis I introduce an optimal encoding framework that can describe the probability function of a Gaussian-like random variable in a pool of Poisson neurons. A distributed neural model is then proposed that can optimally combine conditional probabilities over sensory signals in order to compute Bayesian Multisensory Causal Inference. This is known to be a complex multisensory function in the cortex, and it has recently been found to be performed within a distributed hierarchy in sensory cortex. Our work is among the first successful attempts to put a mechanistic spotlight on the neural mechanism underlying Multisensory Causal Perception in the brain and, more generally, on a theory of decentralized multisensory integration in sensory cortex. Interest in engineering the brain's information-processing concepts into new computing technologies has been growing recently, and Neuromorphic Engineering is a new branch that undertakes this mission. In a dedicated part of this thesis, I propose a neuromorphic algorithm for event-based stereoscopic fusion. The algorithm is anchored in the idea of cooperative computing, imposing the epipolar and temporal constraints of the stereoscopic setup onto the neural dynamics. Its performance is tested using a pair of silicon retinas.
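The encoding idea summarised above, a Gaussian-like variable represented by a pool of Poisson neurons and decoded from their spike counts, can be sketched as follows; the Gaussian tuning curves, firing rates, and decoding grid are generic assumptions rather than the fitted model from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pool of Poisson neurons with Gaussian tuning curves tiling the stimulus axis.
preferred = np.linspace(-10, 10, 41)    # preferred stimulus of each neuron
tuning_width = 2.0
peak_rate = 30.0                        # spikes per second at the preferred stimulus
dt = 0.5                                # observation window in seconds

def population_response(stimulus):
    """Draw Poisson spike counts from the Gaussian tuning curves."""
    rates = peak_rate * np.exp(-(stimulus - preferred) ** 2 / (2 * tuning_width ** 2))
    return rng.poisson(rates * dt)

def decode_log_likelihood(counts, candidates=np.linspace(-10, 10, 401)):
    """Log-likelihood of each candidate stimulus given the observed counts."""
    rates = peak_rate * np.exp(-(candidates[:, None] - preferred[None, :]) ** 2
                               / (2 * tuning_width ** 2)) * dt
    # Poisson log-likelihood, dropping the count-factorial term (constant in s).
    ll = np.sum(counts * np.log(rates + 1e-12) - rates, axis=1)
    return candidates, ll

counts = population_response(stimulus=3.0)
candidates, ll = decode_log_likelihood(counts)
print("Maximum-likelihood estimate of the stimulus:", candidates[np.argmax(ll)])
```

Summing the log-likelihoods obtained from two such pools corresponds to the fused (common-cause) branch of a Bayesian causal-inference model.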

    Connectome-Constrained Artificial Neural Networks

    Get PDF
In biological neural networks (BNNs), structure provides a set of guard rails by which function is constrained to solve tasks effectively, handle multiple stimuli simultaneously, adapt to noise and input variations, and limit energy expenditure. Such features are desirable for artificial neural networks (ANNs), which are, unlike their organic counterparts, practically unbounded and, in many cases, initialized with random weights or arbitrary structural elements. In this dissertation, we consider an inductive base case for imposing BNN constraints onto ANNs. We select explicit connectome topologies from the fruit fly (one of the smallest BNNs) and impose these onto a multilayer perceptron (MLP) and a reservoir computer (RC) in order to craft “fruit fly neural networks” (FFNNs). We study the impact on performance, variance, and prediction dynamics of using FFNNs compared to non-FFNN models on odour classification, chaotic time-series prediction, and multifunctionality tasks. From a series of four experimental studies, we observe that the fly olfactory brain is well suited to recalling and making predictions from chaotic input data, with a capacity for executing two mutually exclusive tasks from distinct initial conditions and with low sensitivity to hyperparameter fluctuations that can lead to chaotic behaviour. We also observe that the clustering coefficient of the fly network, and its particular non-zero weight positions, are important for reducing model variance. These findings suggest that BNNs have distinct advantages over arbitrarily weighted ANNs, notably from their structure alone. Further work with connectomes drawn from across species will be useful in finding shared topological features which can further enhance ANNs, and Machine Learning overall.
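A minimal sketch of the constraint mechanism described above, in which a binary connectome-derived mask gates the weight matrices of an MLP; the random sparse mask below merely stands in for the actual fruit-fly connectome blocks used in the dissertation, and the layer widths are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_mlp_forward(x, weights, masks):
    """Forward pass where each weight matrix is element-wise gated by a binary mask."""
    h = x
    for W, M in zip(weights, masks):
        h = np.tanh(h @ (W * M))   # only connectome-permitted synapses contribute
    return h

layer_sizes = [50, 30, 10]          # illustrative layer widths
weights, masks = [], []
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    weights.append(rng.normal(scale=0.1, size=(n_in, n_out)))
    # Stand-in for a connectome adjacency block: roughly 20% of connections allowed.
    masks.append((rng.random((n_in, n_out)) < 0.2).astype(float))

y = masked_mlp_forward(rng.normal(size=(4, 50)), weights, masks)
print(y.shape)   # (4, 10)
```

During training, the same masks would also gate the gradient updates so that connections absent from the connectome remain at zero.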

    Interpersonal synchrony and network dynamics in social interaction [Special issue]

    Get PDF

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Get PDF
Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
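To make the MR streaming pattern concrete, here is a minimal mapper/reducer pair for one generation of the discrete Game of Life in a Hadoop-streaming style; live cells are assumed to arrive as `row<TAB>col` lines on stdin, and both stages run in a single process for illustration (on Elastic MR they would be separate streaming executables with a shuffle/sort between them). Strip partitioning and the continuous variant from the paper are omitted.

```python
import sys
from collections import defaultdict

def mapper(lines):
    """For every live cell, emit a liveness record plus one vote to each of its 8 neighbours."""
    for line in lines:
        r, c = map(int, line.split())
        yield (r, c), "ALIVE"
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    yield (r + dr, c + dc), 1

def reducer(pairs):
    """Apply Conway's rules to each cell from its grouped records."""
    cells = defaultdict(lambda: [False, 0])   # key -> [was_alive, neighbour_count]
    for key, value in pairs:
        if value == "ALIVE":
            cells[key][0] = True
        else:
            cells[key][1] += value
    for (r, c), (alive, n) in cells.items():
        if n == 3 or (alive and n == 2):      # birth or survival
            yield r, c

if __name__ == "__main__":
    live = [line for line in sys.stdin if line.strip()]
    for r, c in reducer(mapper(live)):
        print(f"{r}\t{c}")
```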