
    Development of neural units with higher-order synaptic operations and their applications to logic circuits and control problems

    Get PDF
    Neural networks play an important role in the execution of goal-oriented paradigms. They offer flexibility, adaptability and versatility, so that a variety of approaches may be used to meet a specific goal, depending upon the circumstances and the requirements of the design specifications. The development of higher-order neural units with higher-order synaptic operations opens a new window on complex problems such as control of aerospace vehicles, pattern recognition, and image processing. The neural models described in this thesis treat the behavior of a single neuron as the basic computing unit in neural information processing; each computing unit in the network is based on the concept of an idealized neuron in the central nervous system (CNS). Recent mathematical models and architectures for neuro-control systems have generated considerable theoretical and industrial interest, and advances in static and dynamic neural networks have had a profound impact on the field of neuro-control. Neural networks consisting of several layers of neurons with linear synaptic operation have been used extensively in applications such as pattern recognition, system identification and the control of complex systems such as flexible structures and intelligent robotic systems. The conventional linear neural model is a highly simplified model of the biological neuron; using it, many neural morphologies, usually referred to as multilayer feedforward neural networks (MFNNs), have been reported in the literature. The performance of such neurons is strongly affected when a layer of neurons is applied to system identification, pattern recognition and control problems. Simulation studies of the XOR logic show that neurons with linear synaptic operation are limited to linearly separable pattern distributions, although they can perform a variety of complex mathematical operations when arranged in a network structure. Such networks suffer from limitations in computational efficiency and learning capability, and they ignore many salient features of biological neurons, such as time delays, cross- and self-correlations, and feedback paths, which are important in neural activity. In this thesis an effort is made to develop new mathematical models of neurons that belong to the class of higher-order neural units (HONUs) with higher-order synaptic operations such as quadratic and cubic synaptic operations. The advantage of this type of neural unit is improved performance, but it comes at the cost of an exponential increase in the number of parameters, which slows the training process. In this context, a novel representation of the weight parameters that does not sacrifice neural performance is introduced. A generalised representation of the higher-order synaptic operation is proposed, and it is shown that many existing neural structures can be derived from it. In 1943, McCulloch and Pitts modeled the stimulus-response behavior of the primitive neuron using threshold logic, and since then it has become common practice to implement logic circuits using neural structures. In this research, logic circuits such as OR, AND, and XOR were realized using the proposed neural structures. These neural structures were also implemented as neuro-controllers for control problems such as satellite attitude control and model reference adaptive control, and a comparative study of their performance against that of conventional linear controllers is presented. The simulation results obtained in this research apply only to the simplified models used in the simulation studies.
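
    As a concrete illustration of why a higher-order synaptic operation helps with XOR, the sketch below trains a single neural unit with a quadratic synaptic operation (QSO) on the XOR truth table. This is a minimal numpy example written for this summary, not code from the thesis; the feature layout, cross-entropy training and learning rate are assumptions.

```python
import numpy as np

# XOR truth table: not linearly separable, so a single neuron with a
# linear synaptic operation (LSO) cannot realize it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def quadratic_features(x):
    """Quadratic synaptic operation: augment [x1, x2] with a bias term
    and all second-order products x1*x2, x1^2, x2^2."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

Phi = np.array([quadratic_features(x) for x in X])   # 4 x 6 design matrix
w = np.zeros(Phi.shape[1])                           # one weight per synaptic term

# Plain gradient descent on the cross-entropy loss (assumed training setup).
lr = 1.0
for _ in range(5000):
    out = sigmoid(Phi @ w)
    w -= lr * Phi.T @ (out - y) / len(y)

print(np.round(sigmoid(Phi @ w)))   # expected to approach [0. 1. 1. 0.]
```

    In the quadratic feature space the XOR targets become linearly separable (for binary inputs, x1 XOR x2 = x1 + x2 - 2*x1*x2), which is exactly the separability argument made in the abstract.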

    Toward Abstraction from Multi-modal Data: Empirical Studies on Multiple Time-scale Recurrent Models

    Full text link
    Abstraction tasks are challenging for multi-modal sequences, as they require a deeper semantic understanding of the data and novel text generation. Although recurrent neural networks (RNNs) can model the context of time sequences, the long-term dependencies of multi-modal data often cause the gradients of back-propagation-through-time training to vanish in the time domain. Recently, inspired by the Multiple Time-scale Recurrent Neural Network (MTRNN), an extension of the Gated Recurrent Unit (GRU) called the Multiple Time-scale Gated Recurrent Unit (MTGRU) has been proposed to learn long-term dependencies in natural language processing; in particular, it can accomplish the abstraction task for paragraphs provided that the time constants are well defined. In this paper, we compare the MTRNN and the MTGRU in terms of their learning performance as well as their abstraction representations at the higher level (with slower neural activation). This was done in two studies, one based on a smaller data-set (two-dimensional time sequences from non-linear functions) and one on a relatively large data-set (43-dimensional time sequences from iCub manipulation tasks with multi-modal data). We conclude that gated recurrent mechanisms may be necessary for learning long-term dependencies in high-dimensional multi-modal data-sets (e.g. learning of robot manipulation), even when natural language commands are not involved, whereas for smaller learning tasks with simple time sequences, generic recurrent models such as the MTRNN are sufficient to accomplish the abstraction task. Comment: Accepted by IJCNN 201
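
    A minimal sketch of the multiple time-scale idea discussed above: a GRU-style candidate update is blended into the hidden state through a leaky integrator whose time constant tau controls how slowly a layer reacts, so a large-tau layer forms slower, more abstract activations. This is an illustrative numpy reading of the MTGRU concept, not the authors' implementation; the gating equations follow the standard GRU, and the layer sizes and tau values are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MTGRUCell:
    """GRU cell with an MTRNN-style time constant tau (illustrative sketch).
    Larger tau -> slower hidden-state dynamics."""

    def __init__(self, n_in, n_hid, tau, rng=np.random.default_rng(0)):
        scale = 1.0 / np.sqrt(n_in + n_hid)
        # One weight matrix per gate, acting on the concatenation [input, hidden].
        self.Wz = rng.normal(0, scale, (n_hid, n_in + n_hid))
        self.Wr = rng.normal(0, scale, (n_hid, n_in + n_hid))
        self.Wh = rng.normal(0, scale, (n_hid, n_in + n_hid))
        self.tau = float(tau)

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                       # update gate
        r = sigmoid(self.Wr @ xh)                       # reset gate
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        h_gru = (1.0 - z) * h + z * h_cand              # standard GRU update
        # Leaky integration over the GRU update: tau slows how quickly
        # the hidden state is allowed to change.
        return (1.0 - 1.0 / self.tau) * h + (1.0 / self.tau) * h_gru

# Fast and slow layers stacked as in a multiple time-scale network.
fast, slow = MTGRUCell(3, 8, tau=2.0), MTGRUCell(8, 8, tau=16.0)
hf, hs = np.zeros(8), np.zeros(8)
for x in np.random.default_rng(1).normal(size=(20, 3)):   # toy 3-d sequence
    hf = fast.step(x, hf)
    hs = slow.step(hf, hs)
```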

    NASA JSC neural network survey results

    Get PDF
    A survey of Artificial Neural Systems in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control Research Program was conducted. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, software tools, etc.

    An Analysis of the Connections Between Layers of Deep Neural Networks

    Full text link
    We present an analysis of different techniques for selecting the connections between layers of deep neural networks. Traditional deep neural networks use random connection tables between layers to keep the number of connections small and to tune to different image features. This kind of connection performs adequately in supervised deep networks because their values are refined during training. On the other hand, in unsupervised learning, one cannot rely on back-propagation techniques to learn the connections between layers. In this work, we tested four different techniques for connecting the first layer of the network to the second layer on the CIFAR and SVHN datasets and showed that the accuracy can be improved by up to 3% depending on the technique used. We also showed that learning the connections based on the co-occurrences of the features does not confer an advantage over a random connection table in small networks. This work is helpful for improving the efficiency of connections between the layers of unsupervised deep neural networks.
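
    For readers unfamiliar with connection tables, the sketch below builds the kind of random table the paper takes as a baseline: each feature map in layer 2 is connected to only a small random subset of layer-1 feature maps, realized as a binary mask over the convolution weights. The map counts, fan-in and kernel size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_layer1_maps = 32     # feature maps produced by layer 1 (assumed)
n_layer2_maps = 64     # feature maps in layer 2 (assumed)
fan_in = 4             # each layer-2 map sees only 4 layer-1 maps

# Random connection table: rows index layer-2 maps, columns layer-1 maps.
table = np.zeros((n_layer2_maps, n_layer1_maps), dtype=bool)
for j in range(n_layer2_maps):
    table[j, rng.choice(n_layer1_maps, size=fan_in, replace=False)] = True

# Applied as a mask on a full convolution kernel bank of shape
# (out_maps, in_maps, k, k): masked-out connections stay at zero.
k = 5
weights = rng.normal(0, 0.05, (n_layer2_maps, n_layer1_maps, k, k))
weights *= table[:, :, None, None]

print(table.sum(axis=1))   # every layer-2 map keeps exactly `fan_in` inputs
```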

    Memory and information processing in neuromorphic systems

    Full text link
    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this effort by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically plausible models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems. Comment: Submitted to Proceedings of IEEE, review of recently proposed neuromorphic computing platforms and systems
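
    To make the memory-with-processing contrast concrete, the sketch below keeps a neuron's synaptic weights and its membrane state in the same small structure and updates them locally, in the spirit of the architectures surveyed here. It is a generic leaky integrate-and-fire toy written for this summary, not a model of any particular chip; the parameter values are arbitrary.

```python
import numpy as np

class LIFNeuron:
    """Toy leaky integrate-and-fire unit whose synaptic weights live
    next to its membrane state (memory co-located with processing)."""

    def __init__(self, weights, tau=20.0, v_thresh=1.0):
        self.w = np.asarray(weights, dtype=float)   # local synaptic memory
        self.v = 0.0                                # membrane potential
        self.tau = tau
        self.v_thresh = v_thresh

    def step(self, spikes_in, dt=1.0):
        # Leak plus weighted sum of incoming spikes, all computed locally.
        self.v += dt * (-self.v / self.tau + self.w @ spikes_in)
        if self.v >= self.v_thresh:
            self.v = 0.0          # reset after firing
            return 1
        return 0

neuron = LIFNeuron(weights=[0.3, 0.5, 0.2])
rng = np.random.default_rng(0)
out = [neuron.step(rng.integers(0, 2, size=3)) for _ in range(50)]
print(sum(out), "spikes in 50 steps")
```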

    Neural units with higher-order synaptic operations with applications to edge detection and control systems

    Get PDF
    Biological sense organs embody enormous computational potential. Artificial neural structures have emulated aspects of the central nervous system; however, most researchers have used only a linear combination in the synaptic operation. In this thesis, that neural structure is referred to as the neural unit with linear synaptic operation (LSO). The objective of the research reported in this thesis is to develop novel neural units with higher-order synaptic operations (HOSO) and to explore their potential applications. Neural units with quadratic synaptic operation (QSO) and cubic synaptic operation (CSO) are developed and reported in this thesis, and a comparative analysis of the neural units with LSO, QSO, and CSO is presented. It is to be noted that neural units with lower-order synaptic operations are subsets of neural units with higher-order synaptic operations. For more complex problems, the neural units with higher-order synaptic operations are found to be considerably more efficient than those with lower-order synaptic operations. Motivated by the dynamics of biological neural systems, a dynamic neural structure is proposed and implemented using the neural unit with CSO. The dynamic structure makes the system response relatively insensitive to external disturbances and to internal variations in system parameters. With the success of these dynamic structures, researchers are inclined to replace the recurrent (feedback) neural networks (NNs) in their present systems with neural units with CSO. Applications of these novel dynamic neural structures are gaining ground in image processing for machine vision and in motion control. One machine-vision capability that emulates a biological attribute is edge detection; edge detection of images is a significant component of computer vision, remote sensing and image analysis, and the neural units with HOSO replicate some of the biological attributes relevant to it. Furthermore, developments in robotics are gaining momentum in neural control applications with the introduction of mobile robots that use the neural units with HOSO; a CCD camera provides vision and several photo-sensors are attached to the machine. In summary, it is demonstrated that the neural units with HOSO provide advanced control capability for a mobile robot with neuro-vision and neuro-control systems.
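
    For reference, a common way to write the synaptic operations compared in this thesis, with x the bias-augmented input vector (x_0 = 1) and w the adjustable weights feeding an activation function, is shown below; the exact notation used in the thesis may differ.

```latex
% Linear synaptic operation (LSO)
u_{\mathrm{LSO}} = \sum_{i=0}^{n} w_i \, x_i
% Quadratic synaptic operation (QSO): adds all second-order products
u_{\mathrm{QSO}} = \sum_{i=0}^{n} \sum_{j=i}^{n} w_{ij} \, x_i x_j
% Cubic synaptic operation (CSO): adds all third-order products
u_{\mathrm{CSO}} = \sum_{i=0}^{n} \sum_{j=i}^{n} \sum_{k=j}^{n} w_{ijk} \, x_i x_j x_k
% Neuron output: y = \phi(u), where \phi is the activation function.
```

    Because x_0 = 1, each higher-order form contains all lower-order terms, which is why the LSO and QSO units can be viewed as subsets of the CSO unit.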

    Embodied neuromorphic intelligence

    Full text link
    The design of robots that interact autonomously with the environment and exhibit complex behaviours is an open challenge that can benefit from understanding what makes living beings fit to act in the world. Neuromorphic engineering studies neural computational principles to develop technologies that can provide a computing substrate for building compact and low-power processing systems. We discuss why endowing robots with neuromorphic technologies – from perception to motor control – represents a promising approach for the creation of robots that can seamlessly integrate into society. We present initial attempts in this direction, highlight open challenges, and propose actions required to overcome current limitations.