
    From neural-based object recognition toward microelectronic eyes

    Engineered neural-network systems are best known for their ability to adapt to the changing characteristics of the surrounding environment by adjusting system parameter values during the learning process. Rapid advances in analog current-mode design techniques have made it possible to implement major neural network functions in custom VLSI chips. An electrically programmable analog synapse cell with a large dynamic range can be realized in a compact silicon area. New designs of synapse cells, neurons, and analog processors are presented. A synapse cell based on the Gilbert multiplier structure can perform the linear multiplication required for back-propagation networks. A double differential-pair synapse cell can perform the Gaussian function for radial-basis networks. The synapse cells can be biased in the strong-inversion region for high-speed operation or in the subthreshold region for low-power operation. The voltage gain of the sigmoid-function neurons is externally adjustable, which greatly facilitates the search for optimal solutions in certain networks. Various building blocks can be intelligently connected to form useful industrial applications. Efficient data communication is a key system-level design issue for large-scale networks. We also present analog neural processors based on the perceptron architecture and the Hopfield network for communication applications. Biologically inspired neural networks have played an important role in the creation of powerful intelligent machines. The accuracy, limitations, and prospects of analog current-mode design for biologically inspired vision-processing chips and cellular neural network chips are key design issues.
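The two transfer functions mentioned in the abstract, the Gaussian response of the double differential-pair synapse cell and the sigmoid neuron with externally adjustable gain, can be sketched numerically. The Python functions and parameter values below are illustrative assumptions for exposition only, not taken from the chip designs themselves.

```python
import math

def sigmoid_neuron(v_in, gain):
    # Sigmoid transfer with an externally adjustable gain parameter:
    # a larger gain gives a steeper transition around the midpoint.
    return 1.0 / (1.0 + math.exp(-gain * v_in))

def gaussian_synapse(v_in, v_center, width):
    # Gaussian (radial-basis) response: maximal when the input voltage
    # matches the center voltage, falling off with a tunable width.
    return math.exp(-((v_in - v_center) / width) ** 2)

# Illustrative values (hypothetical, in normalized units)
print(sigmoid_neuron(0.0, 5.0))            # 0.5 at the midpoint regardless of gain
print(gaussian_synapse(0.2, 0.2, 0.1))     # 1.0, the peak response at the center
```

Adjusting `gain` reshapes the sigmoid without moving its midpoint, which is the knob the abstract describes for tuning the search for optimal solutions.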

    Clustering by means of a Boltzmann machine with partial constraint satisfaction

    The clustering problem refers to the partitioning of target sightings into sets. Two sightings are in the same set if and only if they are generated by sensor detections of the same target and lie on the same great circle arc (GARC) trajectory of that target. A Boltzmann machine is developed whose sparse architecture provides for only partial constraint satisfaction of the associated cost function. This, together with a special graphics interface, serves as an aid in determining GARCs. Our approach differs from others in that the neural net is built to operate in conjunction with a non-neural tracker. This further restricts the architectural complexity of the network and facilitates future experimentation regarding decomposition of the neural net across several von Neumann processors. The Boltzmann machine architecture also eases the effort of finding optimal or near-optimal solutions. Results are presented. The demonstrated feasibility of neural GARC determination encourages investigation into extending its role in the track-formation process, utilizing an environment that includes supercomputers, neurocomputers, or optical hardware. The network architecture is capable of identifying a host of geometric forms other than GARCs and can thus be used in several domains, including space, land, and ocean.
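The stochastic update rule at the heart of a Boltzmann machine can be sketched as follows. The tiny network, weights, and temperature are illustrative assumptions; the mostly zero weight matrix loosely mirrors the sparse architecture that enforces constraints only partially, not the actual GARC cost function.

```python
import math
import random

def boltzmann_step(state, weights, biases, T, rng):
    # One asynchronous Gibbs update: pick a random unit, compute its
    # energy gap given the rest of the state, and set it to 1 with the
    # logistic probability at temperature T.
    i = rng.randrange(len(state))
    gap = biases[i] + sum(weights[i][j] * state[j]
                          for j in range(len(state)) if j != i)
    p_on = 1.0 / (1.0 + math.exp(-gap / T))
    state[i] = 1 if rng.random() < p_on else 0
    return state

# Hypothetical 3-unit sparse network: zero entries mean the pair of
# units is unconnected, so that constraint is simply not represented.
rng = random.Random(0)
w = [[0, -2, 0],
     [-2, 0, 1],
     [0, 1, 0]]
b = [1, 1, 1]
s = [0, 0, 0]
for _ in range(100):
    boltzmann_step(s, w, b, T=0.5, rng=rng)
print(s)
```

Lowering `T` over time (simulated annealing) drives the state toward low-energy configurations, which is why the architecture eases the search for near-optimal solutions.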

    A Decade of Neural Networks: Practical Applications and Prospects

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to deal intelligently and adaptively with the complex, fuzzy, and often ill-defined world around us remains largely unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight the benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

    Computational issues in process optimisation using historical data.

    This thesis presents a new generic approach to improving the computational efficiency of neural-network training algorithms and investigates the applicability of their 'learning from examples' feature in improving the performance of a current intelligent diagnostic system. The contribution of this thesis is summarised in the following two points. First, it is shown for the first time in the literature that significant improvements in the computational efficiency of neural-network algorithms can be achieved using the proposed methodology based on adaptive-gain variation. Second, the capabilities of the current Knowledge Hyper-surface method (Meghana R. Ransing, 2002) are enhanced to overcome its existing limitations in modelling an exponential increase in the shape of the hyper-surface. Neural-network techniques, particularly back-propagation algorithms, have been widely used as a tool for discovering a mapping function between a known set of input and output examples. Neural networks learn from the known example set by adjusting their internal parameters, referred to as weights, using an optimisation procedure based on the least-squares-fit principle. The optimisation procedure normally involves thousands of iterations to converge to an acceptable solution; hence, improving the computational efficiency of a neural-network algorithm is an active area of research. Various options for improving the computational efficiency of neural networks have been reviewed in this thesis. It has been shown in the existing literature that variation of the gain parameter improves the learning efficiency of the gradient-descent method, and previous researchers have claimed that adaptive-gain variation improved the learning rate and hence the efficiency. However, it was discovered in this thesis that the gain variation has no influence on the learning rate; it actually influences the search direction.
This made it possible to develop a novel approach that modifies the gradient-search direction by introducing adaptive-gain variation. The proposed method is robust, and it has been shown that it can easily be implemented in all commonly used gradient-based optimisation algorithms. It has also been shown to significantly improve computational efficiency compared to existing neural-network training algorithms. Computer simulations on a number of benchmark problems are used throughout to illustrate the improvements proposed in this thesis. A large amount of data is generated within a foundry every time a casting is poured. Furthermore, with the increasing availability of computing tools and power, there is a need to develop an efficient, intelligent diagnostic tool that can learn from historical data to gain further insight into cause-and-effect relationships. In this study the performance of the current Knowledge Hyper-surface method was reviewed, and its mathematical formulation was analysed to identify its limitations. An enhancement is proposed by introducing mid-points into the existing shape formulation. It is shown that the mid-point shape function can successfully constrain the shape of the decision hyper-surface to become more realistic, with acceptable results in the multi-dimensional case. This is a novel and original approach and is of direct relevance to the foundry industry.
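The role of the gain parameter in gradient descent can be sketched with a single sigmoid unit. This is a minimal illustration, not the thesis's actual algorithm: the weights, inputs, learning rate, and the multiplicative gain schedule are all hypothetical values chosen for the example.

```python
import math

def sigmoid(x, gain):
    # Activation with a gain parameter c: y = 1 / (1 + exp(-c * x)).
    return 1.0 / (1.0 + math.exp(-gain * x))

def train_step(w, gain, x, t, lr):
    # One gradient-descent step for a single sigmoid unit.
    net = sum(wi * xi for wi, xi in zip(w, x))
    y = sigmoid(net, gain)
    # dE/dw_i = (y - t) * y * (1 - y) * gain * x_i: the gain enters the
    # gradient through the activation's derivative. With per-neuron gains
    # in a multi-layer network, adapting them rescales different gradient
    # components differently, which modifies the search direction rather
    # than acting as a simple learning-rate change.
    grad = [(y - t) * y * (1 - y) * gain * xi for xi in x]
    return [wi - lr * gi for wi, gi in zip(w, grad)], y

w = [0.1, -0.2]
gain = 1.0
for step in range(200):
    w, y = train_step(w, gain, x=[1.0, 0.5], t=1.0, lr=0.5)
    gain = min(gain * 1.01, 5.0)  # illustrative adaptive-gain schedule (assumption)
print(round(y, 3))
```

Running the loop drives the output toward the target of 1.0; the growing gain steepens the activation and so accelerates convergence compared to holding `gain = 1.0` fixed in this toy setting.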