
    An instruction systolic array architecture for multiple neural network types

    Modern electronic systems, especially sensor and imaging systems, are beginning to incorporate their own neural network subsystems. In order for these neural systems to learn in real time they must be implemented using VLSI technology, with as much of the learning process incorporated on-chip as possible. The majority of current VLSI implementations literally implement a series of neural processing cells, which can be connected together in an arbitrary fashion. Many do not perform the entire neural learning process on-chip, instead relying on external systems to carry out part of the computational requirements of the algorithm. The work presented here utilises two-dimensional instruction systolic arrays in an attempt to define a general neural architecture that is closer to the biological basis of neural networks: it is the synapses themselves, rather than the neurons, that have dedicated processing units. A unified architecture is described which can be programmed at the microcode level in order to facilitate the processing of multiple neural network types. An essential part of neural network processing is the neuron activation function, which can range from a sequential algorithm to a discrete mathematical expression. The architecture presented can easily carry out the sequential functions, and introduces a fast method of mathematical approximation for the more complex functions. This can be evaluated on-chip, thus implementing the entire neural process within a single system. VHDL circuit descriptions for the chip have been generated, and the systolic processing algorithms and associated microcode instruction set for three different neural paradigms have been designed. A software simulator of the architecture has been written, giving results for several common applications in the field.
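
    As a concrete illustration of the kind of fast, hardware-friendly activation-function approximation the abstract refers to, the sketch below evaluates the logistic sigmoid with a standard piecewise-linear scheme; the particular breakpoints and the function sigmoid_pl are illustrative assumptions, not the approximation actually described in the paper.

        # Illustrative piecewise-linear sigmoid approximation (assumed scheme;
        # the abstract does not specify the paper's actual on-chip method).
        # Each segment needs only a multiply (or shift) and an add, which is
        # why such schemes suit VLSI implementation.
        import math

        def sigmoid_pl(x: float) -> float:
            """Piecewise-linear approximation of the logistic sigmoid."""
            ax = abs(x)
            if ax >= 5.0:
                y = 1.0                      # saturation region
            elif ax >= 2.375:
                y = 0.03125 * ax + 0.84375   # shallow outer segment
            elif ax >= 1.0:
                y = 0.125 * ax + 0.625       # middle segment
            else:
                y = 0.25 * ax + 0.5          # near-linear region around zero
            return y if x >= 0.0 else 1.0 - y  # exploit sigmoid symmetry

        for x in (-6.0, -1.5, 0.0, 0.8, 3.0, 6.0):
            print(f"x={x:+.1f}  approx={sigmoid_pl(x):.4f}  "
                  f"exact={1.0 / (1.0 + math.exp(-x)):.4f}")

    In a fixed-point datapath the slopes 0.25, 0.125 and 0.03125 reduce to right shifts, so each evaluation costs only a comparison, a shift and an add.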

    NONLINEAR ADAPTIVE SIGNAL PROCESSING

    Nonlinear techniques for signal processing and recognition promise systems that are superior to linear systems in a number of ways, such as better accuracy, fault tolerance, resolution, highly parallel architectures and closer similarity to biological intelligent systems. The nonlinear techniques proposed take the form of multistage neural networks in which each stage can be a particular neural network and all the stages operate in parallel. The specific approach focused upon is the parallel, self-organizing, hierarchical neural network (PSHNN). A new type of PSHNN is discussed in which the outputs are allowed to be continuous-valued. The performance of the resulting networks is tested on problems of prediction of speech and of chaotic time series. Three types of networks, in which the stages are learned by the delta rule, sequential least-squares, and the backpropagation (BP) algorithm, respectively, are described. In all cases studied, the new networks achieve better performance than linear prediction. This is shown both theoretically and experimentally. A revised BP algorithm is discussed for learning input nonlinearities. The advantage of the revised BP algorithm is that the PSHNN with revised BP stages can be extended to use sequential least-squares (SLS) or the least mean absolute value (LMAV) rule in the last stage. A forward-backward training algorithm for parallel, self-organizing hierarchical neural networks is described. Using linear algebra, it is shown that the forward-backward training of an n-stage PSHNN until convergence is equivalent to the pseudo-inverse solution for a single, total network designed in the least-squares sense, with the total input vector consisting of the actual input vector and its additional nonlinear transformations. These results are also valid when a single long input vector is partitioned into smaller-length vectors. The advantages achieved include small modules for easy and fast learning, parallel implementation of small modules during testing, faster convergence, better numerical error reduction, and suitability for learning input nonlinear transformations by the backpropagation algorithm. Better performance, in terms of a deeper minimum of the error function and a faster convergence rate, is achieved when a single BP network is replaced by a PSHNN of equal complexity in which each stage is a BP network of smaller complexity than the single BP network.
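
    The least-squares view stated above can be illustrated with a small sketch: the total input is the actual input window concatenated with a nonlinear transformation of it, and the output weights of the total network are obtained with the pseudo-inverse. The synthetic series, window length and tanh transform below are assumptions for illustration, not the experiments of the paper.

        # Minimal sketch (assumed setup): pseudo-inverse / least-squares fit of a
        # "total network" whose input is the raw window plus a nonlinear
        # transformation of it, applied to one-step-ahead prediction.
        import numpy as np

        # Toy series standing in for speech or a chaotic time series.
        t = np.arange(0, 40, 0.05)
        series = np.sin(t) + 0.5 * np.sin(2.7 * t)

        # Sliding windows of past samples as inputs, the next sample as target.
        order = 8
        X = np.stack([series[i:i + order] for i in range(len(series) - order)])
        y = series[order:]

        # Total input = linear part plus an elementwise nonlinear transformation
        # (tanh here, standing in for the nonlinear stages of the PSHNN).
        X_total = np.hstack([X, np.tanh(X)])

        # Pseudo-inverse (least-squares) solutions for the weights.
        w_linear = np.linalg.pinv(X) @ y
        w_total = np.linalg.pinv(X_total) @ y

        print("linear-only MSE:", np.mean((X @ w_linear - y) ** 2))
        print("total-net MSE  :", np.mean((X_total @ w_total - y) ** 2))

    Because the total network's feature set contains the linear features, its least-squares error can never exceed that of the purely linear predictor, which is the sense in which the nonlinear stages can only help.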

    Recurrent Neural Networks Applied to GNSS Time Series for Denoising and Prediction

    Global Navigation Satellite Systems (GNSS) are systems that continuously acquire data and provide position time series. Many monitoring applications are based on GNSS data, and their efficiency depends on the capability of the time series analysis to characterize the signal content and/or to predict incoming coordinates. In this work we propose a suitable network architecture, based on Long Short-Term Memory recurrent neural networks, to solve two main tasks in GNSS time series analysis: denoising and prediction. We carry out an analysis on a synthetic time series, then we inspect two different real case studies and evaluate the results. We develop a non-deep network that removes almost 50% of the scatter from real GNSS time series and achieves coordinate prediction with a Mean Squared Error of 1.1 millimeters.
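
    A minimal sketch of the prediction task with a non-deep LSTM follows; the synthetic coordinate series, the 30-sample window, the layer sizes and the training settings are all assumptions chosen for illustration, not the configuration reported in the paper.

        # Minimal non-deep LSTM sketch (assumed architecture and data): predict
        # the next coordinate of a noisy GNSS-like series from a short window.
        import numpy as np
        import tensorflow as tf

        rng = np.random.default_rng(1)

        # Synthetic daily coordinate series in mm: trend + annual signal + noise.
        days = np.arange(2000)
        coords = (0.01 * days
                  + 2.0 * np.sin(2 * np.pi * days / 365.25)
                  + rng.normal(0.0, 1.0, days.size))

        # Sliding windows: use the previous `window` samples to predict the next.
        window = 30
        X = np.stack([coords[i:i + window]
                      for i in range(len(coords) - window)])[..., None]
        y = coords[window:]

        split = int(0.8 * len(X))
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(window, 1)),
            tf.keras.layers.LSTM(32),   # single recurrent layer (non-deep)
            tf.keras.layers.Dense(1),   # next-coordinate regression
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X[:split], y[:split], epochs=10, batch_size=64, verbose=0)

        print("held-out MSE (mm^2):", model.evaluate(X[split:], y[split:], verbose=0))

    The same windowed setup can be reused for denoising by training the network to output a smoothed target instead of the next sample.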