Spiking Neural Networks -- Part I: Detecting Spatial Patterns
Spiking Neural Networks (SNNs) are biologically inspired machine learning
models that build on dynamic neuronal models processing binary and sparse
spiking signals in an event-driven, online fashion. SNNs can be implemented on
neuromorphic computing platforms that are emerging as energy-efficient
co-processors for learning and inference. This is the first of a series of
three papers that introduce SNNs to an audience of engineers by focusing on
models, algorithms, and applications. In this first paper, we cover
neural models used for conventional Artificial Neural Networks (ANNs) and SNNs.
Then, we review learning algorithms and applications for SNNs that aim at
mimicking the functionality of ANNs by detecting or generating spatial patterns
in rate-encoded spiking signals. We specifically discuss ANN-to-SNN conversion
and neural sampling. Finally, we validate the capabilities of SNNs for
detecting and generating spatial patterns through experiments.
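The rate coding this abstract relies on can be illustrated with a toy sketch: an intensity is encoded as a Bernoulli spike train and fed to a leaky integrate-and-fire neuron, whose output firing rate grows with the input rate. This is an illustration of the general idea, not code from the paper; all parameter values (weight, leak, threshold) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(x, n_steps):
    """Encode an intensity x in [0, 1] as a Bernoulli spike train:
    at each time step a spike is emitted with probability x."""
    return (rng.random(n_steps) < x).astype(float)

def lif_neuron(spikes, weight=0.5, tau=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron driven by a binary spike train.
    The membrane potential leaks by factor tau, integrates weighted
    input spikes, and fires (with a hard reset) at the threshold."""
    v, out = 0.0, []
    for s in spikes:
        v = tau * v + weight * s
        if v >= threshold:
            out.append(1.0)
            v = 0.0          # reset after firing
        else:
            out.append(0.0)
    return np.array(out)

in_train = rate_encode(0.8, n_steps=200)
out_train = lif_neuron(in_train)
# The output rate increases monotonically with the encoded intensity,
# which is the property rate-based ANN-to-SNN conversion relies on.
print(in_train.mean(), out_train.mean())
```

The monotone rate-to-rate mapping is what lets a trained ANN's activations be reinterpreted as firing rates after conversion.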
Modeling human behavior in user-adaptive systems: recent advances using soft computing techniques
Adaptive Hypermedia systems are becoming more important in our everyday activities, and users expect ever more intelligent services from them. The key element of a generic adaptive hypermedia system is the user model. Traditional machine learning techniques used to create user models are usually too rigid to capture the inherent uncertainty of human behavior. In this context, soft computing techniques can be used to handle and process human uncertainty and to simulate human decision-making. This paper examines how soft computing techniques, including fuzzy logic, neural networks, genetic algorithms, fuzzy clustering, and neuro-fuzzy systems, have been used, alone or in combination with other machine learning techniques, for user modeling from 1999 to 2004. For each technique, its main applications, limitations, and future directions for user modeling are presented. The paper also presents guidelines indicating which soft computing techniques should be used depending on the task the application implements.
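The fuzzy-logic approach this abstract surveys can be sketched as a pair of Mamdani-style rules that map observed user behavior to an adaptation decision. This is a minimal toy, not taken from any surveyed system; the membership-function breakpoints and rule set are assumptions chosen for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer_support(error_rate, time_on_page):
    """Two fuzzy rules combined with max-min inference:
      R1: IF error_rate IS high AND dwell time IS long THEN support IS high
      R2: IF error_rate IS low                         THEN support IS low
    Returns a crisp support level in [0, 1] via a weighted average."""
    high_err = tri(error_rate, 0.4, 1.0, 1.6)    # "high" error membership
    low_err  = tri(error_rate, -0.6, 0.0, 0.6)   # "low" error membership
    long_t   = tri(time_on_page, 30, 120, 210)   # "long" dwell time (s)

    w_high = min(high_err, long_t)   # rule R1 firing strength (fuzzy AND)
    w_low  = low_err                 # rule R2 firing strength
    if w_high + w_low == 0:
        return 0.5                   # no rule fires: neutral default
    return (w_high * 1.0 + w_low * 0.0) / (w_high + w_low)

print(infer_support(0.8, 100))  # struggling user: more support
print(infer_support(0.1, 100))  # confident user: less support
```

Graded memberships are exactly what lets such systems model the "inherent uncertainty of human behavior" that crisp rules miss.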
Learning through structure: towards deep neuromorphic knowledge graph embeddings
Computing latent representations for graph-structured data is a ubiquitous
learning task in many industrial and academic applications, ranging from
molecule synthesis to social network analysis and recommender systems.
Knowledge graphs are among the most popular and widely used data
representations related to the Semantic Web. Next to structuring factual
knowledge in a machine-readable format, knowledge graphs serve as the backbone
of many artificial intelligence applications and allow the ingestion of context
information into various learning algorithms. Graph neural networks attempt to
encode graph structures in low-dimensional vector spaces via a message passing
heuristic between neighboring nodes. In recent years, a multitude of graph
neural network architectures have demonstrated ground-breaking performance
in many learning tasks. In this work, we propose a strategy to map
deep graph learning architectures for knowledge graph reasoning to neuromorphic
architectures. Based on the insight that randomly initialized and untrained
(i.e., frozen) graph neural networks are able to preserve local graph
structures, we compose a frozen neural network with shallow knowledge graph
embedding models. We show experimentally that, even on conventional computing
hardware, this yields a significant speedup and memory reduction while
maintaining a competitive level of performance. Moreover, we extend the frozen
architecture to spiking neural networks, introducing a novel, event-based and
highly sparse knowledge graph embedding algorithm that is suitable for
implementation in neuromorphic hardware.
Comment: Accepted for publication at the International Conference on Neuromorphic Computing (ICNC 2021).
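The composition this abstract describes can be sketched in a few lines: a randomly initialized, untrained (frozen) message-passing encoder produces structure-preserving node features, and a shallow translation-style scorer is fitted on top. This is an illustrative toy with numpy, not the paper's implementation; the tiny graph, the mean-aggregation rule, and the TransE-style scorer are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy knowledge graph: (head, relation, tail) triples over 5 entities.
triples = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (3, 1, 4)]
n_ent, n_rel, dim = 5, 2, 8

# Frozen encoder: a randomly initialized, untrained message-passing step.
# Node states are propagated with a fixed random weight matrix, which
# tends to preserve local graph structure (the abstract's key insight).
W = rng.normal(size=(dim, dim)) / np.sqrt(dim)
H = rng.normal(size=(n_ent, dim))            # random initial node states
A = np.zeros((n_ent, n_ent))
for h, _, t in triples:
    A[h, t] = A[t, h] = 1.0                  # undirected adjacency
deg = np.maximum(A.sum(1, keepdims=True), 1)
for _ in range(2):                           # two frozen propagation rounds
    H = np.tanh((A / deg) @ H @ W)           # mean aggregation, frozen W

# Shallow embedding model on top: TransE-style scoring where only the
# relation vectors are trained; entity features come from the frozen encoder.
R = np.zeros((n_rel, dim))
lr = 0.1
for _ in range(200):
    for h, r, t in triples:
        grad = (H[h] + R[r]) - H[t]          # d/dR of 0.5 * ||h + r - t||^2
        R[r] -= lr * grad

def score(h, r, t):
    """Lower is better: distance ||e_h + v_r - e_t||."""
    return np.linalg.norm(H[h] + R[r] - H[t])

print(score(0, 0, 1), score(0, 0, 4))  # observed triple vs. corrupted tail
```

Because the encoder is never trained, all the gradient work falls on the shallow relation vectors, which is what produces the speedup and memory reduction the abstract reports.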
One step backpropagation through time for learning input mapping in reservoir computing applied to speech recognition
Recurrent neural networks are very powerful engines for processing information that is coded in time; however, many problems remain with common training algorithms such as Backpropagation Through Time. Because of this, another important learning setup, known as Reservoir Computing, has appeared in recent years, in which one uses an essentially untrained network to perform computations. Though very successful in many applications, random networks can be quite inefficient in terms of the required number of neurons and the associated computational cost. In this paper we introduce a highly simplified version of Backpropagation Through Time that truncates the error backpropagation to one step back in time, and we combine it with the classic Reservoir Computing setup using an instantaneous linear readout. We apply this setup to a spoken digit recognition task and show that it gives very good results for small networks.
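The baseline setup the paper builds on can be sketched as follows: a fixed random recurrent reservoir is driven by the input, and only an instantaneous linear readout is fitted by ridge regression. This sketch covers the classic Reservoir Computing part only; the paper's contribution (additionally training the input mapping with a one-step truncation of Backpropagation Through Time) is not implemented here, and the delayed-sine task and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_res, T = 1, 50, 300

# Fixed random reservoir, spectral radius scaled below 1 for stability.
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))
W_in = rng.normal(size=(n_res, n_in))

def run_reservoir(u):
    """Drive the untrained recurrent network and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_res @ x + W_in @ np.atleast_1d(u_t))
        states.append(x)
    return np.array(states)

# Toy memory task: predict the input sine delayed by 5 steps.
t = np.arange(T)
u = np.sin(0.1 * t)
y = np.roll(u, 5)

X = run_reservoir(u)
# Instantaneous linear readout fitted by ridge regression, the classic
# Reservoir Computing training step (only these weights are learned).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
mse = np.mean((pred[50:] - y[50:]) ** 2)   # skip the initial transient
print(mse)
```

The paper's one-step truncated gradient would additionally update `W_in` using the readout error, trading a small amount of training for a much smaller reservoir.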
Simple, Efficient and Convenient Decentralized Multi-Task Learning for Neural Networks
Artificial intelligence relying on machine learning is increasingly used on small, personal, network-connected devices such as smartphones and vocal assistants, and these applications will likely evolve with the development of the Internet of Things. The learning process requires a lot of data, often real users' data, and computing power. Decentralized machine learning can help to protect users' privacy by keeping sensitive training data on users' devices, and it has the potential to alleviate the cost borne by service providers by off-loading some of the learning effort to user devices. Unfortunately, most approaches proposed so far for distributed learning with neural networks are single-task and do not transfer easily to multi-task problems, in which users seek to solve related but distinct learning tasks; the few existing multi-task approaches have serious limitations. In this paper, we propose a novel learning method for neural networks that is decentralized, multi-task, and keeps users' data local. Our approach works with different learning algorithms and on various types of neural networks. We formally analyze the convergence of our method, and we evaluate its efficiency in different situations, on various kinds of neural networks and with different learning algorithms, thus demonstrating its benefits in terms of learning quality and convergence.
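The decentralized multi-task idea can be sketched with linear models: each peer runs gradient steps on its private data, then peers average only the parameters corresponding to the shared part of the tasks, while task-specific parameters (and all raw data) stay local. This is a deliberately simplified illustration, not the paper's method, which handles general neural networks; the shared/private weight split and all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
dim_shared, dim_private = 5, 2
dim = dim_shared + dim_private
n_peers, rounds, lr = 3, 300, 0.1

# Related but distinct tasks: the first dim_shared true weights are
# common to every task, the last dim_private weights are task-specific.
w_common = rng.normal(size=dim_shared)
w_true = [np.concatenate([w_common, rng.normal(size=dim_private)])
          for _ in range(n_peers)]

# Each peer holds its own private dataset, which is never exchanged.
data = []
for k in range(n_peers):
    X = rng.normal(size=(60, dim))
    data.append((X, X @ w_true[k]))

w = [np.zeros(dim) for _ in range(n_peers)]
for _ in range(rounds):
    # Local gradient step on each peer's private data.
    for k, (X, y) in enumerate(data):
        grad = X.T @ (X @ w[k] - y) / len(y)
        w[k] -= lr * grad
    # Decentralized step: peers average ONLY the shared coordinates;
    # the task-specific coordinates stay local to each peer.
    avg = sum(wk[:dim_shared] for wk in w) / n_peers
    for wk in w:
        wk[:dim_shared] = avg

err = [np.linalg.norm(w[k] - w_true[k]) for k in range(n_peers)]
print(err)
```

Averaging only the shared block is what makes the scheme multi-task: each peer benefits from the others on the common structure without being forced onto their task-specific solutions.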
Deep Learning Methods for Partial Differential Equations and Related Parameter Identification Problems
Recent years have witnessed a growth in mathematics for deep learning, which
seeks a deeper understanding of the concepts of deep learning with mathematics
and explores how to make it more robust, and in deep learning for mathematics,
where deep learning algorithms are used to solve problems in mathematics. The
latter has popularised the field of scientific machine learning where deep
learning is applied to problems in scientific computing. Specifically, more and
more neural network architectures have been developed to solve specific classes
of partial differential equations (PDEs). Such methods exploit properties that
are inherent to PDEs and thus solve the PDEs better than standard feed-forward
neural networks, recurrent neural networks, or convolutional neural networks.
This has had a great impact in the area of mathematical modeling where
parametric PDEs are widely used to model most natural and physical processes
arising in science and engineering. In this work, we review such methods as
well as their extensions for parametric studies and for solving the related
inverse problems. We also show their relevance in some industrial applications.
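The core idea behind the PDE solvers this survey covers is to penalize the residual of the differential equation at collocation points. A minimal sketch, with a polynomial trial function standing in for the neural network so that the residual is linear in the parameters and can be solved in closed form (the test problem u'(x) = -u(x), u(0) = 1, and all weights are assumptions chosen for illustration):

```python
import numpy as np

# Physics-informed collocation for u'(x) = -u(x), u(0) = 1 on [0, 1],
# whose exact solution is exp(-x). In the surveyed methods a neural
# network plays the role of the trial function; a polynomial basis
# keeps this sketch solvable by least squares.
deg = 8
xs = np.linspace(0.0, 1.0, 50)               # collocation points

def basis(x):
    return np.array([x ** j for j in range(deg + 1)])

def dbasis(x):
    return np.array([j * x ** (j - 1) if j > 0 else 0.0
                     for j in range(deg + 1)])

# One residual row u'(x_i) + u(x_i) = 0 per collocation point, plus a
# weighted row enforcing the initial condition u(0) = 1.
A = np.array([dbasis(x) + basis(x) for x in xs])
b = np.zeros(len(xs))
A = np.vstack([A, basis(0.0)[None, :] * 10.0])
b = np.append(b, 10.0)

theta, *_ = np.linalg.lstsq(A, b, rcond=None)

def u(x):
    return basis(x) @ theta

max_err = max(abs(u(x) - np.exp(-x)) for x in xs)
print(max_err)
```

Swapping the basis for a network and the least-squares solve for gradient descent on the same residual loss gives the physics-informed training loop in its usual form.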