83 research outputs found

    Application of backpropagation-like generative algorithms to various problems.

    Thesis (M.Sc.) - University of Natal, Durban, 1992. Artificial neural networks (ANNs) were originally inspired by networks of biological neurons and the interactions present in networks of these neurons. The recent revival of interest in ANNs has again focused attention on the apparent ability of ANNs to solve difficult problems, such as machine vision, in novel ways. There are many types of ANNs which differ in architecture and learning algorithms, and the list grows annually. This study was restricted to feed-forward architectures and Backpropagation-like (BP-like) learning algorithms. However, it is well known that the learning problem for such networks is NP-complete. Thus generative and incremental learning algorithms, which have various advantages and to which the NP-completeness analysis used for BP-like networks may not apply, were also studied. Various algorithms were investigated and their performance compared. Finally, the better algorithms were applied to a number of problems including music composition, image binarization, and navigation and goal satisfaction in an artificial environment. These tasks were chosen to investigate different aspects of ANN behaviour. The results, where appropriate, were compared to those from non-ANN methods, and varied from poor to very encouraging.
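
    As a concrete illustration of the backpropagation-style learning referred to above, the following minimal NumPy sketch trains a small feed-forward network on XOR. The architecture, task and learning rate are illustrative assumptions, not the algorithms evaluated in the thesis.

        import numpy as np

        # Minimal feed-forward network trained with plain backpropagation on XOR.
        # The 2-3-1 architecture, sigmoid units and learning rate are illustrative choices.
        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        W1 = rng.normal(size=(2, 3))              # input -> hidden weights
        b1 = np.zeros((1, 3))
        W2 = rng.normal(size=(3, 1))              # hidden -> output weights
        b2 = np.zeros((1, 1))

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        lr = 0.5
        for _ in range(20000):
            h = sigmoid(X @ W1 + b1)              # forward pass
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)   # backward pass (squared-error gradient)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * h.T @ d_out
            b2 -= lr * d_out.sum(axis=0, keepdims=True)
            W1 -= lr * X.T @ d_h
            b1 -= lr * d_h.sum(axis=0, keepdims=True)

        print(out.round(2))                       # typically approaches [0, 1, 1, 0]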

    Robustness and generalisation : tangent hyperplanes and classification trees

    The issue of robust training is tackled for fixed multilayer feedforward architectures. Several researchers have proved the theoretical capabilities of Multilayer Feedforward networks, but in practice the robust convergence of standard methods such as standard backpropagation, conjugate gradient descent and Quasi-Newton methods may be poor on various problems. It is suggested that the common assumptions about the overall surface shape break down when many individual component surfaces are combined, and robustness suffers accordingly. A new method to train Multilayer Feedforward networks is presented in which no particular shape is assumed for the surface and an attempt is made to optimally combine the individual components of a solution into the overall solution. The method is based on computing Tangent Hyperplanes to the non-linear solution manifolds. At the core of the method is a mechanism to minimise the sum of squared errors, and as such its use is not limited to Neural Networks. The set of tests performed for Neural Networks shows that the method is very robust with regard to convergence of training and has a powerful ability to find good directions in weight space. Generalisation is also a very important issue in Neural Networks and elsewhere: Neural Networks are expected to provide sensible outputs for unseen inputs. A framework for hyperplane-based classifiers is presented for improving average generalisation. The framework attempts to establish a trained boundary so that there is an optimal overall spacing from the boundary to the training points closest to it. The framework is shown to provide results consistent with the theoretical expectations.
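
    One way to read the tangent-hyperplane idea is as a Gauss-Newton-style step: linearise the model outputs around the current weights (a tangent hyperplane to each output surface) and solve the resulting linear least-squares problem for the weight update. The sketch below shows only that reading, on a toy one-layer model with hypothetical data; it is not the exact procedure developed in the thesis.

        import numpy as np

        # Gauss-Newton style step: linearise the model around the current weights
        # and solve the linearised sum-of-squared-errors problem for the update.
        # Toy single-layer tanh model and noise-free data are illustrative assumptions.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(20, 3))
        y_true = np.tanh(X @ np.array([1.0, -2.0, 0.5]))

        w = np.zeros(3)                            # current weight estimate
        for _ in range(20):
            pred = np.tanh(X @ w)
            residual = y_true - pred
            J = (1 - pred**2)[:, None] * X         # Jacobian of predictions w.r.t. weights
            # Solve J @ dw ~= residual; a small damping term keeps the step well conditioned.
            dw = np.linalg.solve(J.T @ J + 1e-3 * np.eye(3), J.T @ residual)
            w += dw

        print(w.round(3))                          # typically close to [1.0, -2.0, 0.5]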

    Small nets and short paths: optimising neural computation


    New Learning and Control Algorithms for Neural Networks.

    Neural networks offer distributed processing power, error-correcting capability and structural simplicity of the basic computing element. They have been found to be attractive for applications such as associative memory, robotics, image processing, speech understanding and optimization. Neural networks are self-adaptive systems that try to configure themselves to store new information. This dissertation investigates two approaches to improving performance: better learning and supervisory control. A new learning algorithm called the Correlation Continuous Unlearning (CCU) algorithm is presented. It is based on the idea of removing undesirable information that is encountered during the learning period. The control methods proposed in the dissertation improve convergence by affecting the order of updates using a controller. Most previous studies have focused on monolithic structures, but it is known that the human brain has a bicameral nature at the gross level and also has several specialized structures. In this dissertation we investigate the computing characteristics of neural networks that are not monolithic but are enhanced by a controller running algorithms that take advantage of the known global characteristics of the stored information. Such networks have been called bicameral neural networks. Stinson and Kak considered elementary bicameral models that used asynchronous control. Two new control methods, the method of iteration and the bicameral classifier, are now proposed. The method of iteration uses the Hamming distance between the probe and the answer to control the convergence to a correct answer, whereas the bicameral classifier takes advantage of global characteristics using a clustering algorithm. The bicameral classifier is applied to two different models of equiprobable patterns as well as the more realistic situation where patterns can have different probabilities. The CCU algorithm has also been applied to a bidirectional associative memory with greatly improved performance. For multilayered networks, indexing of patterns to enhance system performance has been studied.
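
    The following sketch illustrates, in minimal form, a Hopfield-style associative memory recalled under a small supervisory loop that monitors the Hamming distance between the probe and the evolving state. It is meant only to make the probe/answer terminology concrete; the CCU algorithm and the actual bicameral control schemes are not reproduced here.

        import numpy as np

        # Hopfield-style associative memory with a supervisory recall loop that
        # tracks the Hamming distance from the probe. Pattern sizes and counts
        # are illustrative; this is not the CCU or bicameral algorithms themselves.
        rng = np.random.default_rng(2)
        patterns = rng.choice([-1, 1], size=(3, 32))        # stored bipolar patterns
        W = (patterns.T @ patterns) / patterns.shape[1]     # Hebbian weight matrix
        np.fill_diagonal(W, 0)

        def recall(probe, max_sweeps=20):
            state = probe.copy()
            for _ in range(max_sweeps):
                previous = state.copy()
                for i in rng.permutation(len(state)):       # asynchronous unit updates
                    state[i] = 1 if W[i] @ state >= 0 else -1
                if np.array_equal(state, previous):         # fixed point reached
                    break
            return state, int(np.sum(state != probe))       # answer and distance moved

        # Probe with a corrupted copy of the first stored pattern (5 flipped bits).
        probe = patterns[0].copy()
        probe[rng.choice(32, size=5, replace=False)] *= -1
        answer, moved = recall(probe)
        print("bits matching stored pattern:", int(np.sum(answer == patterns[0])),
              "of 32; moved", moved, "bits from the probe")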

    Micro-, Meso- and Macro-Dynamics of the Brain

    Neurosciences, Neurology, Psychiatry

    Computational aspects of cellular intelligence and their role in artificial intelligence.

    The work presented in this thesis is concerned with an exploration of the computational aspects of the primitive intelligence associated with single-celled organisms. The main aim is to explore this Cellular Intelligence and its role within Artificial Intelligence. The findings of an extensive literature search into the biological characteristics, properties and mechanisms associated with Cellular Intelligence, its underlying machinery (Cell Signalling Networks) and the existing computational methods used to capture it are reported. The results of this search are then used to fashion the development of a versatile new connectionist representation, termed the Artificial Reaction Network (ARN). The ARN belongs to the branch of Artificial Life known as Artificial Chemistry and has properties in common with both Artificial Intelligence and Systems Biology techniques, including Artificial Neural Networks, Artificial Biochemical Networks, Gene Regulatory Networks, Random Boolean Networks, Petri Nets and S-Systems. The thesis outlines the following original work. The ARN is used to model the chemotaxis pathway of Escherichia coli and is shown to capture emergent characteristics associated with this organism and with Cellular Intelligence more generally. The computational properties of the ARN and its applications in robotic control are explored by combining functional motifs found in biochemical networks to create temporally changing waveforms which control the gaits of limbed robots. This system is then extended into a complete control system by combining pattern recognition with limb control in a single ARN. The results show that the ARN can offer increased flexibility over existing methods. Multiple distributed cell-like ARN-based agents, termed Cytobots, are created. These are first used to simulate aggregating cells based on the slime mould Dictyostelium discoideum. The Cytobots are shown to capture emergent behaviour arising from multiple stigmergic interactions. Applications of Cytobots within swarm robotics are investigated by applying them to benchmark search problems and to the task of cleaning up a simulated oil spill. The results are compared to those of established optimization algorithms using similar cell-inspired strategies, and to other robotic agent strategies. Consideration is given to the advantages and disadvantages of the technique and suggestions are made for future work in the area. The report concludes that the Artificial Reaction Network is a versatile and powerful technique which has application both in the simulation of chemical systems and in robotic control, where it can offer a higher degree of flexibility and computational efficiency than benchmark alternatives. Furthermore, it provides a tool which may throw further light on the origins and limitations of the primitive intelligence associated with cells.
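
    As a rough illustration of turning a biochemical motif into a temporally changing control waveform, the sketch below integrates the Brusselator, a classic two-species chemical oscillator, and maps its output onto a hypothetical joint-angle command. The rate constants, integration scheme and angle mapping are illustrative assumptions, not the ARN formalism itself.

        import numpy as np

        # Brusselator oscillator as a stand-in for a rhythm-generating biochemical motif.
        # x and y are species concentrations; x is read out as a control waveform.
        A, B = 1.0, 3.0                       # sustained oscillations when B > 1 + A**2
        dt, steps = 0.001, 50000
        x, y = 1.0, 1.0
        trace = []
        for _ in range(steps):
            dx = A + x * x * y - (B + 1.0) * x
            dy = B * x - x * x * y
            x += dt * dx                      # simple Euler integration
            y += dt * dy
            trace.append(x)

        trace = np.array(trace)
        # Map the concentration onto a hypothetical joint-angle command in roughly [-30, 30] degrees.
        angles = 60.0 * (trace - trace.mean()) / (np.ptp(trace) + 1e-9)
        print(angles[::5000].round(1))        # samples of the rhythmic waveform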

    High Efficiency Real-Time Sensor and Actuator Control and Data Processing

    The advances in sensor and actuator technology foster the use of large multi-transducer networks in many different fields. The increasing complexity of such networks poses problems in data processing, especially when high efficiency is required for real-time applications. In fact, multi-transducer data processing usually consists of the interconnection and co-operation of several modules devoted to different tasks. Multi-transducer network modules often include tasks such as control, data acquisition, data filtering interfaces, feature selection and pattern analysis. Heterogeneous techniques derived from chemometrics, neural networks and fuzzy rules, used to implement such tasks, may introduce module interconnection and co-operation issues. To help deal with these problems, the author presents a software library architecture for the dynamic and efficient management of multi-transducer data processing and control techniques. The framework’s base architecture and the implementation details of several extensions are described. Starting from the base models available in the framework core, dedicated models for control processes and neural network tools have been derived. The Facial Automaton for Conveying Emotion (FACE) has been used as a test field for the control architecture.
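
    A minimal sketch of the module interconnection problem described above: if every processing stage exposes the same interface, heterogeneous modules (filters, feature selectors, classifiers) can be chained and swapped freely. The class and method names here are hypothetical and do not reflect the library's actual API.

        from abc import ABC, abstractmethod
        from typing import Any, List

        # Common interface so heterogeneous processing modules can be interconnected.
        class ProcessingModule(ABC):
            @abstractmethod
            def process(self, data: Any) -> Any:
                ...

        class MovingAverageFilter(ProcessingModule):
            # Simple data-filtering stage: smooths a sample stream with a sliding window.
            def __init__(self, window: int = 3):
                self.window = window
            def process(self, data: List[float]) -> List[float]:
                out = []
                for i in range(len(data)):
                    chunk = data[max(0, i - self.window + 1):i + 1]
                    out.append(sum(chunk) / len(chunk))
                return out

        class ThresholdClassifier(ProcessingModule):
            # Simple pattern-analysis stage: flags samples above a threshold.
            def __init__(self, threshold: float):
                self.threshold = threshold
            def process(self, data: List[float]) -> List[int]:
                return [1 if x > self.threshold else 0 for x in data]

        class Pipeline(ProcessingModule):
            # A pipeline is itself a module, so pipelines can be nested and co-operate.
            def __init__(self, stages: List[ProcessingModule]):
                self.stages = stages
            def process(self, data: Any) -> Any:
                for stage in self.stages:
                    data = stage.process(data)
                return data

        # Example: raw sensor samples -> smoothing -> event detection.
        pipeline = Pipeline([MovingAverageFilter(window=3), ThresholdClassifier(threshold=0.5)])
        print(pipeline.process([0.1, 0.2, 0.9, 1.0, 0.8, 0.1]))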

    SpiNNaker - A Spiking Neural Network Architecture

    Twenty years in conception and 15 in construction, the SpiNNaker project has delivered the world’s largest neuromorphic computing platform, incorporating over a million ARM mobile-phone processors and capable of modelling spiking neural networks at the scale of a mouse brain in biological real time. This machine, hosted at the University of Manchester in the UK, is freely available under the auspices of the EU Flagship Human Brain Project. This book tells the story of the origins of the machine, its development and its deployment, and the immense software development effort that has gone into making it openly available and accessible to researchers and students the world over. It also presents exemplar applications, from ‘Talk’, a SpiNNaker-controlled robotic exhibit at the Manchester Art Gallery as part of ‘The Imitation Game’, a set of works commissioned in 2016 in honour of Alan Turing, through to a way of solving hard computing problems using stochastic neural networks. The book concludes with a look to the future and the SpiNNaker-2 machine which is yet to come.
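
    For readers unfamiliar with the neuron models such platforms simulate, the sketch below steps a single leaky integrate-and-fire neuron in plain Python. SpiNNaker itself is normally programmed through the PyNN interface rather than code like this; the constants used here are illustrative.

        import numpy as np

        # A single leaky integrate-and-fire (LIF) neuron driven by a step current.
        # Membrane constants are illustrative; the 1 ms step echoes biological real time.
        dt = 1.0                                            # ms
        tau_m, v_rest, v_reset, v_thresh = 20.0, -65.0, -65.0, -50.0   # ms, mV
        r_m = 1.0                                           # membrane resistance (arbitrary units)

        current = np.where(np.arange(0, 200, dt) > 50, 20.0, 0.0)      # step input at 50 ms
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(current):
            dv = (-(v - v_rest) + r_m * i_in) / tau_m       # leaky integration
            v += dt * dv
            if v >= v_thresh:                               # threshold crossing emits a spike
                spike_times.append(step * dt)
                v = v_reset

        print("spike times (ms):", spike_times)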