A Reinforcement Learning Framework for Spiking Networks with Dynamic Synapses
An integration of Hebbian-based and reinforcement learning (RL) rules is presented for dynamic synapses. The proposed framework lets the Hebbian rule update the hidden synaptic model parameters that regulate the synaptic response, rather than the synaptic weights themselves. This is performed using both the value and the sign of the temporal difference in the reward signal after each trial. Using this framework, a spiking network with spike-timing-dependent synapses is trained to learn the exclusive-OR computation on a temporally coded basis. Reward values are calculated from the distance between the network's output spike train and a reference target train. Results show that the network captures the required dynamics and that the proposed framework indeed realizes an integrated version of Hebbian learning and RL. The framework is tractable and computationally inexpensive, is applicable to a wide class of synaptic models, and is not restricted to the neural representation used here. This generality, along with the reported results, supports adopting the introduced approach to exploit biologically plausible synaptic models in a wide range of signal-processing applications.
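The core idea, updating a hidden synaptic parameter from the temporal difference (TD) of the reward rather than updating weights directly, can be illustrated with a toy sketch. This is not the paper's actual model: the surrogate reward, the parameter name, and the acceptance rule (which uses only the sign of the TD to accept or reverse a step, whereas the paper's rule also uses its value) are all illustrative assumptions.

```python
def surrogate_reward(tau, optimum=0.8):
    # Stand-in for running the spiking network for one trial: the reward
    # (negative distance between output and target spike trains) is assumed
    # to peak at an unknown optimal value of the hidden parameter.
    return -(tau - optimum) ** 2

tau = 0.2          # hidden synaptic parameter (e.g. a facilitation time constant)
step = 0.05        # Hebbian-like update magnitude
direction = 1.0
prev_reward = surrogate_reward(tau)

for trial in range(200):
    candidate = tau + direction * step
    r = surrogate_reward(candidate)
    td = r - prev_reward          # temporal difference of the reward signal
    if td >= 0:
        tau = candidate           # a non-negative TD confirms the update
        prev_reward = r
    else:
        direction = -direction    # a negative TD reverses the update direction
    step *= 0.99                  # slowly annealed update magnitude
```

After the loop, `tau` has drifted close to the (unknown to the learner) optimum of 0.8, purely by reacting to trial-to-trial reward differences.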
Joint Extraction of Entities and Relations Using Reinforcement Learning and Deep Learning
We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured text. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from the unstructured text, which represents the state in the decision process. By designing a per-step reward function, the proposed method can pass information from entity extraction to relation extraction and obtain feedback, so that entities and relations are extracted simultaneously. First, we use a bidirectional LSTM to model the context information, which realizes preliminary entity extraction. Based on these extraction results, an attention-based method represents the sentences that contain the target entity pair to generate the initial state of the decision process. Then we use a Tree-LSTM to represent relation mentions and generate the transition state of the decision process. Finally, we employ the Q-learning algorithm to obtain the control policy π for the two-step decision process. Experiments on ACE2005 demonstrate that our method outperforms the state-of-the-art method and achieves a 2.4% increase in recall.
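The two-step decision process with a Q-learned control policy can be sketched in tabular form. This is a deliberately minimal stand-in: the action sets, the ground truth, and the per-step rewards are hypothetical, and the neural state representations (BiLSTM, attention, Tree-LSTM) are replaced by symbolic states.

```python
import random
random.seed(0)

# Two-step decision process: step 0 picks an entity-pair hypothesis,
# step 1 picks a relation label for it (toy action sets).
ENTITY_ACTIONS = ["pair_A", "pair_B"]
RELATION_ACTIONS = ["works_for", "located_in", "none"]
TRUE_PAIR, TRUE_REL = "pair_A", "works_for"   # hypothetical ground truth

def reward(step, action):
    # A per-step reward passes entity information forward to relation extraction.
    if step == 0:
        return 1.0 if action == TRUE_PAIR else -1.0
    return 2.0 if action == TRUE_REL else -1.0

Q = {}                              # Q[(state, action)] table
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    state = ("start",)
    for step, actions in enumerate([ENTITY_ACTIONS, RELATION_ACTIONS]):
        if random.random() < eps:                       # epsilon-greedy policy
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q.get((state, x), 0.0))
        r = reward(step, a)
        next_state = state + (a,)
        if step == 0:                                   # bootstrap from step 1
            future = max(Q.get((next_state, x), 0.0) for x in RELATION_ACTIONS)
        else:                                           # terminal after step 1
            future = 0.0
        q = Q.get((state, a), 0.0)
        Q[(state, a)] = q + alpha * (r + gamma * future - q)   # Q-learning update
        state = next_state

greedy_pair = max(ENTITY_ACTIONS, key=lambda x: Q.get((("start",), x), 0.0))
greedy_rel = max(RELATION_ACTIONS,
                 key=lambda x: Q.get((("start", greedy_pair), x), 0.0))
```

After training, the greedy policy chains the correct entity decision into the correct relation decision, which is exactly the information-passing effect the per-step reward is designed to create.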
Towards a Brain-inspired Information Processing System: Modelling and Analysis of Synaptic Dynamics
Biological neural systems (BNS) in general, and the central nervous system (CNS) in particular, exhibit strikingly efficient computational power along with an extremely flexible and adaptive basis for acquiring and integrating new knowledge. Gaining more insight into the actual mechanisms of information processing within the BNS and their computational capabilities is a core objective of modern computer science, the computational sciences, and neuroscience. Among the main reasons for this drive to understand the brain is to help improve the quality of life of people suffering from partial or complete loss of brain or spinal cord functions. Brain-computer interfaces (BCI), neural prostheses, and similar approaches are potential solutions, either helping these patients through therapy or pushing progress in rehabilitation. There is, however, a significant lack of knowledge regarding basic information processing within the CNS. Without a better understanding of the fundamental operations or sequences leading to cognitive abilities, applications like BCI or neural prostheses will keep struggling to find a proper and systematic way to help patients in this regard.

To gain more insight into these basic information processing methods, this thesis presents an approach that makes a formal distinction between the essence of being intelligent (as for the brain) and the classical class of artificial intelligence, e.g. expert systems. The approach investigates the underlying mechanisms that allow the CNS to perform a massive number of computational tasks with sustainable efficiency and flexibility; this is the essence of being intelligent, i.e. being able to learn, adapt and invent. The approach used in the thesis at hand is based on the hypothesis that the brain, or specifically a biological neural circuit in the CNS, is a dynamic system (network) that features emergent capabilities. These capabilities can be imported into spiking neural networks (SNN) by emulating the dynamic neural system, which requires simulating both the inner workings of the system and the framework for performing the information processing tasks. Thus, this work comprises two main parts.

The first part introduces a proper and novel dynamic synaptic model as a vital constituent of the inner workings of the dynamic neural system. This model represents a balanced integration between the needed biophysical detail and computational economy: biophysical fidelity is important so that the abilities of the target dynamic system can be inherited, while simplicity is needed to allow for large-scale simulations and future hardware implementation. In addition, the energy-related aspects of synaptic dynamics are studied and linked to the behaviour of networks seeking stable states of activity. The second part is consequently concerned with importing the processing framework of the dynamic system into the environment of SNN. This part of the study investigates the well-established concept of binding by synchrony to solve the information binding problem, and proposes the concept of synchrony states within SNN. The concept of computing with states is extended to investigate a computational model based on finite-state machines and reservoir computing. Biologically plausible validations of the introduced model and frameworks are performed. The results and discussion of these validations indicate that this study presents a significant advance toward extending our knowledge of the mechanisms underpinning the computational power of the CNS. Furthermore, it shows a roadmap for adopting biological computational capabilities in computational science in general and in biologically inspired spiking neural networks in particular. Large-scale simulations and the development of neuromorphic hardware are work in progress and future work. Applications of the introduced work include neural prostheses and bionic automation systems.
Integrative (Synchronisation) Mechanisms of (Neuro-)Cognition against the Background of (Neo-)Connectionism, the Theory of Nonlinear Dynamical Systems, Information Theory and the Self-Organisation Paradigm
The subject of the present work, building on its main theme (the exposition and examination of a solution to the binding problem by means of temporal integrative synchronisation mechanisms within the cognitive (neuro-)architectures of (neo-)connectionism, with reference to perceptual and language cognition, and especially to the problems of compositionality and systematicity that arise there), is to sketch the construction of a yet-to-be-developed integrative theory of (neuro-)cognition on the basis of the representational format of a so-called "vectorial form", against the background of (neo-)connectionism, the theory of nonlinear dynamical systems, information theory and the self-organisation paradigm.