Experimental study of artificial neural networks using a digital memristor simulator
© 2018 IEEE. This paper presents a fully digital implementation of a memristor hardware simulator, as the core of an emulator, based on a behavioral model of voltage-controlled threshold-type bipolar memristors. Compared to other analog solutions, the proposed digital design is compact, easily reconfigurable, demonstrates very good matching with the mathematical model on which it is based, and complies with all the required features for memristor emulators. We validated its functionality using Altera Quartus II and ModelSim tools targeting low-cost yet powerful field programmable gate array (FPGA) families. We tested its suitability for complex memristive circuits as well as its synapse functioning in artificial neural networks (ANNs), implementing examples of associative memory and unsupervised learning of spatio-temporal correlations in parallel input streams using a simplified STDP rule. We provide the full circuit schematics of all our digital circuit designs and comment on the required hardware resources and their scaling trends, thus presenting a design framework for applications based on our hardware simulator. Peer reviewed. Postprint (author's final draft).
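To give a flavour of the kind of model being simulated, here is a minimal sketch of a voltage-controlled threshold-type bipolar memristor in Python. All function names, parameter names, and values here are illustrative assumptions, not the paper's actual model or its digital implementation.

```python
# Sketch of a voltage-controlled threshold-type bipolar memristor.
# Parameter names and values are assumptions for illustration only.

def step(x, v, dt, v_on=1.0, v_off=-1.0, k_on=1e3, k_off=1e3):
    """Advance the internal state x in [0, 1] by one time step dt.
    The state changes only when |v| exceeds a polarity-dependent threshold."""
    if v >= v_on:          # positive voltage above threshold: drift toward ON
        x += k_on * (v - v_on) * dt
    elif v <= v_off:       # negative voltage below threshold: drift toward OFF
        x += k_off * (v - v_off) * dt
    return min(max(x, 0.0), 1.0)   # clamp state to [0, 1]

def resistance(x, r_on=100.0, r_off=10_000.0):
    """Linear mixing of the ON/OFF resistances by the state variable."""
    return r_on * x + r_off * (1.0 - x)

# A positive pulse train drives the device from high to low resistance.
x = 0.0
for _ in range(100):
    x = step(x, v=1.5, dt=1e-3)
print(resistance(x))  # device has switched fully ON
```

The key behavioural features the abstract refers to, thresholded switching and bipolar (polarity-dependent) state change, are what the `step` function captures; a digital hardware simulator would realize an update rule of this shape in fixed-point arithmetic.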
Computational models in the age of large datasets.
Technological advances in experimental neuroscience are generating vast quantities of data, from the dynamics of single molecules to the structure and activity patterns of large networks of neurons. How do we make sense of these voluminous, complex, disparate and often incomplete data? How do we find general principles in the morass of detail? Computational models are invaluable and necessary in this task and yield insights that cannot otherwise be obtained. However, building and interpreting good computational models is a substantial challenge, especially so in the era of large datasets. Fitting detailed models to experimental data is difficult and often requires onerous assumptions, while more loosely constrained conceptual models that explore broad hypotheses and principles can yield more useful insights. Charles A. King Trust. This is the author accepted manuscript. The final version is available from Elsevier via http://dx.doi.org/10.1016/j.conb.2015.01.00
Adaptive networks for robotics and the emergence of reward anticipatory circuits
Currently the central challenge facing evolutionary robotics is to determine
how best to extend the range and complexity of behaviour supported by evolved
neural systems. Implicit in the work described in this thesis is the idea that this
might best be achieved through devising neural circuits (tractable to evolutionary
exploration) that exhibit complementary functional characteristics. We concentrate
on two problem domains: locomotion and sequence learning. For locomotion
we compare the use of GasNets and other adaptive networks. For sequence learning
we introduce a novel connectionist model inspired by the role of dopamine
in the basal ganglia (commonly interpreted as a form of reinforcement learning).
This connectionist approach relies upon a new neuron model inspired by notions
of energy efficient signalling. Two reward adaptive circuit variants were investigated.
These were applied respectively to two learning problems: first, where action
sequences are required to take place in a strict order, and second, where action
sequences are robust to intermediate arbitrary states. We conclude the thesis
by proposing a formal model of functional integration, encompassing locomotion
and sequence learning, extending ideas proposed by W. Ross Ashby.
A general model of the adaptive replicator is presented, incorporating subsystems
that are tuned to continuous variation and discrete or conditional events.
Comparisons are made with W. Ross Ashby's model of ultrastability and his
ideas on adaptive behaviour. This model is intended to support our assertion
that GasNets (and similar networks) and reward adaptive circuits of the type
presented here are intrinsically complementary. In conclusion we present some
ideas on how the co-evolution of GasNet and reward adaptive circuits might lead
us to significant improvements in the synthesis of agents capable of exhibiting
complex adaptive behaviour.
A Review of Findings from Neuroscience and Cognitive Psychology as Possible Inspiration for the Path to Artificial General Intelligence
This review aims to contribute to the quest for artificial general
intelligence by examining neuroscience and cognitive psychology methods for
potential inspiration. Despite the impressive advancements achieved by deep
learning models in various domains, they still have shortcomings in abstract
reasoning and causal understanding. Such capabilities should be ultimately
integrated into artificial intelligence systems in order to surpass data-driven
limitations and support decision making in a way more similar to human
intelligence. This work is a vertical review that attempts a wide-ranging
exploration of brain function, spanning from lower-level biological neurons,
spiking neural networks, and neuronal ensembles to higher-level concepts such
as brain anatomy, vector symbolic architectures, cognitive and categorization
models, and cognitive architectures. The hope is that these concepts may offer
insights for solutions in artificial general intelligence.
Comment: 143 pages, 49 figures, 244 references
Computational Logic for Biomedicine and Neurosciences
We advocate here the use of computational logic for systems biology, as a
\emph{unified and safe} framework well suited for modeling the dynamic
behaviour of biological systems, expressing their properties, and verifying
these properties. The potential candidate logics should have a traditional
proof theoretic pedigree (including either induction, or a sequent calculus
presentation enjoying cut-elimination and focusing), and should come with
certified proof tools. Beyond providing a reliable framework, this allows the
correct encodings of our biological systems. For systems biology in general
and biomedicine in particular, we have so far, for the modeling part, three
candidate logics, all based on linear logic. The studied properties and their
proofs are formalized in a very expressive (non-linear) inductive logic: the
Calculus of Inductive Constructions (CIC). The examples we have considered so
far are relatively simple ones; however, they all come with formal semi-automatic
proofs in the Coq system, which implements CIC. In neuroscience, we are
directly using CIC and Coq, to model neurons and some simple neuronal circuits
and prove some of their dynamic properties. In biomedicine, the study of
multi omic pathway interactions, together with clinical and electronic health
record data should help in drug discovery and disease diagnosis. Future work
includes using more automatic provers. This should enable us to specify and
study more realistic examples, and in the long term to provide a system for
disease diagnosis and therapy prognosis.
Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites
This paper presents a spike-based model which employs neurons with
functionally distinct dendritic compartments for classifying high dimensional
binary patterns. The synaptic inputs arriving on each dendritic subunit are
nonlinearly processed before being linearly integrated at the soma, giving the
neuron a capacity to perform a large number of input-output mappings. The model
utilizes sparse synaptic connectivity, where each synapse takes a binary value.
The optimal connection pattern of a neuron is learned by using a simple
hardware-friendly, margin enhancing learning algorithm inspired by the
mechanism of structural plasticity in biological neurons. The learning
algorithm groups correlated synaptic inputs on the same dendritic branch. Since
the learning results in modified connection patterns, it can be incorporated
into current event-based neuromorphic systems with little overhead. This work
also presents a branch-specific spike-based version of this structural
plasticity rule. The proposed model is evaluated on benchmark binary
classification problems and its performance is compared against that achieved
using Support Vector Machine (SVM) and Extreme Learning Machine (ELM)
techniques. Our proposed method attains comparable performance while using
10% to 50% fewer computational resources than the other reported techniques.
Comment: Accepted for publication in Neural Computation
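The two-stage computation the abstract describes, per-branch nonlinear processing of binary synaptic inputs followed by linear summation at the soma, can be sketched as follows. The branch nonlinearity, layer sizes, and connection scheme below are assumptions for illustration, not the paper's exact model or learning rule.

```python
import numpy as np

# Sketch of a neuron with nonlinear dendritic subunits and binary synapses.
# Branch nonlinearity, sizes, and wiring are illustrative assumptions.

rng = np.random.default_rng(0)
n_inputs, n_branches, synapses_per_branch = 100, 10, 5

# Sparse binary connection matrix: each dendritic branch samples a small
# random subset of the input lines, and each synapse is 0 or 1.
conn = np.zeros((n_branches, n_inputs), dtype=int)
for b in range(n_branches):
    conn[b, rng.choice(n_inputs, synapses_per_branch, replace=False)] = 1

def soma_output(x, conn, branch_nonlinearity=np.square):
    """Each branch linearly sums its synaptic inputs, applies a
    nonlinearity, and the soma linearly sums the branch outputs."""
    branch_sums = conn @ x                      # per-branch linear integration
    return branch_nonlinearity(branch_sums).sum()

x = rng.integers(0, 2, n_inputs)                # binary input pattern
print(soma_output(x, conn))
```

A structural-plasticity learning rule of the kind the paper proposes would modify `conn` (which inputs connect to which branch) rather than analog weights, which is why it maps onto event-based neuromorphic hardware with little overhead.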