Towards a scalable and efficient data classification technique.
Data classification is a task found in many everyday activities. In general, the term can be applied to any activity that derives a decision or forecast from the currently available information. More precisely, a classification procedure is the construction of a method for making judgments over a continuing sequence of cases, where each new case must be assigned to one of a set of pre-defined classes. This type of construction has been termed supervised learning, to distinguish it from unsupervised learning, or clustering, in which the classes are not pre-defined but are inferred from the available data. This thesis is divided into five chapters and analyzes three classification techniques, namely the nearest neighbor technique, the perceptron learning algorithm, and multi-layer perceptrons with backpropagation, with respect to performance and scalability. Chapter one introduces the research topic, states the problem that forms the core of this thesis, and defines the objective of the study: selecting the most efficient and scalable classification algorithm for a given classification task. Chapter two reviews the literature on classification, focusing on the topics related to this study and presenting some of the newer classification approaches. Chapter three describes the design of the study and clearly explains the technical methodology used to analyze and investigate the three classification algorithms. Different experiments are presented to support the findings; the datasets used are real-life datasets representing sports-player and car classification tasks. Chapters four and five form the core of this thesis, containing the data analysis, the main findings, and the conclusions drawn from the experiments.
The nearest neighbor classification technique is a lazy learner because it must store all of the training samples before the classification process starts. Although it therefore takes more time to classify an unknown sample, it is considered among the most efficient classification techniques. A natural next step is the single-layer perceptron algorithm, which does not need to store the data samples to reach an acceptable convergence rate. Moreover, it speeds up the recognition and learning process, because it learns and stores only the weights of the neural network used to implement the algorithm. This algorithm has one major deficiency: it works only for linearly separable data samples. This motivates moving to a more scalable and efficient technique, the multi-layer perceptron network with backpropagation, which has the power to solve complex, non-linearly separable classification tasks.
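The contrast drawn above can be sketched in code: a perceptron retains only a weight vector rather than the whole training set, and its update rule converges only when the classes are linearly separable. The sketch below is illustrative only; the function names and the toy data are assumptions, not the thesis's actual experiments.

```python
import numpy as np

def perceptron_train(X, y, epochs=100, lr=1.0):
    """Single-layer perceptron: learns a weight vector instead of storing
    the training samples (contrast with nearest neighbor, which keeps them all).
    X: (n_samples, n_features) array; y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1] + 1)                    # weights plus a bias term
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append constant bias input
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:                  # misclassified sample
                w += lr * yi * xi                   # perceptron update rule
                errors += 1
        if errors == 0:                             # converged: data was linearly separable
            break
    return w

def perceptron_predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.where(Xb @ w > 0, 1, -1)
```

On a linearly separable toy problem (e.g. logical AND) this converges in a handful of epochs; on non-separable data, such as XOR, the loop never reaches zero errors, which is exactly the deficiency that motivates the multi-layer perceptron.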
Training Methods for Shunting Inhibitory Artificial Neural Networks
This project investigates a new class of high-order neural networks called shunting inhibitory artificial neural networks (SIANNs) and their training methods. SIANNs are biologically inspired neural networks whose dynamics are governed by a set of coupled nonlinear differential equations. The interactions among neurons are mediated via a nonlinear mechanism called shunting inhibition, which allows the neurons to operate as adaptive nonlinear filters. The project's main objective is to devise training methods, based on error-backpropagation-type algorithms, that allow SIANNs to be trained to perform feature extraction for classification and nonlinear regression tasks. The training algorithms developed will simplify the task of designing complex, powerful neural networks for applications in pattern recognition, image processing, signal processing, machine vision, and control. The five training methods adapted in this project for SIANNs are error backpropagation based on gradient descent (GD), gradient descent with variable learning rate (GDV), gradient descent with momentum (GDM), gradient descent with direct solution step (GDD), and the APOLEX algorithm. SIANNs and these training methods are implemented in MATLAB. Testing on several benchmarks, including the parity problems, classification of 2-D patterns, and function approximation, shows that SIANNs trained using these methods yield performance comparable to or better than multilayer perceptrons (MLPs).
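Of the variants listed above, gradient descent with momentum (GDM) can be sketched as a single update step: the previous update direction is retained and blended with the current gradient, which damps oscillations along steep loss directions. This is a generic illustrative sketch in Python (the project's implementation is in MATLAB, and `gdm_step` is a hypothetical helper, not the thesis's code).

```python
def gdm_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One gradient-descent-with-momentum update.
    velocity accumulates an exponentially decaying sum of past gradients;
    the weight moves along this smoothed direction rather than the raw gradient."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Minimizing the toy loss f(w) = w**2 (gradient 2w) from w = 5:
w, v = 5.0, 0.0
for _ in range(500):
    w, v = gdm_step(w, 2.0 * w, v, lr=0.1, momentum=0.9)
# w is now driven close to the minimum at 0
```

The same step structure applies whether the gradient comes from a multilayer perceptron or from a SIANN's shunting dynamics; only the gradient computation differs.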
SuperSpike: Supervised learning in multi-layer spiking neural networks
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multi-layer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multi-layer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike-time patterns.
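The surrogate gradient idea at the heart of such rules can be sketched briefly: the spiking nonlinearity is a Heaviside step whose true derivative is zero almost everywhere (and undefined at threshold), so during backpropagation a smooth surrogate derivative is substituted in its place. This is a minimal sketch of the general technique; the fast-sigmoid shape and the `beta` parameter below are illustrative assumptions, not the paper's full voltage-based three-factor rule.

```python
import numpy as np

def spike(u, threshold=1.0):
    """Heaviside spike nonlinearity on membrane potential u.
    Its true gradient is zero almost everywhere, which is why
    plain backpropagation fails for spiking networks."""
    return (u >= threshold).astype(float)

def surrogate_grad(u, threshold=1.0, beta=10.0):
    """Smooth surrogate used in place of the Heaviside's degenerate
    derivative during the backward pass: peaks at the threshold and
    decays with distance from it; beta controls the sharpness."""
    return 1.0 / (beta * np.abs(u - threshold) + 1.0) ** 2
```

In the forward pass the network still emits binary spikes; only the backward pass sees the surrogate, so hidden units receive a usable error signal whenever their potential is near threshold.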