Optimal use of computing equipment in an automated industrial inspection context
This thesis deals with automatic defect detection. The objective was to develop the techniques required by a small manufacturing business to make cost-efficient use of inspection technology. In our work on inspection techniques we discuss image acquisition and the choice between custom and general-purpose processing hardware. We examine the classes of general-purpose computer available and study popular operating systems in detail. We highlight the advantages of a hybrid system interconnected via a local area network and develop a sophisticated suite of image-processing software based on it. We quantitatively study the performance of elements of the TCP/IP networking protocol suite and comment on appropriate protocol selection for parallel distributed applications. We implement our own distributed application based on these findings. In our work on inspection algorithms we investigate the potential uses of iterated function systems and Fourier transform operators when preprocessing images of defects in aluminium plate acquired using a linescan camera. We employ a multi-layer perceptron neural network trained by backpropagation as a classifier. We examine the effect on the training process of the number of nodes in the hidden layer and the ability of the network to identify faults in images of aluminium plate. We investigate techniques for introducing positional independence into the network's behaviour. We analyse the pattern of weights induced in the network after training in order to gain insight into the logic of its internal representation. We conclude that the backpropagation training process is computationally intensive enough to present a real barrier to further development in practical neural network techniques, and we seek ways to achieve a speed-up. We consider the training process as a search problem and arrive at a process involving multiple, parallel search "vectors" and aspects of genetic algorithms. We implement the system as the aforementioned distributed application and comment on its performance.
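The closing idea, treating the weight search as a population of parallel search vectors combined with genetic-algorithm operators, can be sketched in miniature. This is an illustrative reading only, not the thesis's actual algorithm; the function name, parameters and toy objective below are all invented:

```python
import random

def ga_train(loss, dim, pop_size=20, generations=100, seed=0):
    """Evolve a population of candidate weight vectors toward low loss.

    Truncation selection keeps the better half each generation; offspring
    are built by one-point crossover plus Gaussian mutation. Each individual
    is, in effect, one parallel search vector.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)                  # best individuals first
        parents = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]       # one-point crossover
            child = [w + rng.gauss(0, 0.1) for w in child]  # mutation
            children.append(child)
        pop = parents + children            # parents survive unchanged
    return min(pop, key=loss)

# Toy objective: squared distance to a known target weight vector.
target = [0.5, -0.3, 0.8]
best = ga_train(lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target)),
                dim=3)
```

Because the surviving parents carry over unchanged, the best loss in the population never increases from one generation to the next, and each child evaluation is independent, which is what makes the search easy to distribute.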
Object-oriented analysis and design of computational intelligence systems
Machine learning from data, neuro-fuzzy information processing, approximate reasoning and genetic and evolutionary computation are all aspects of computational intelligence (also called soft computing methods). Soft computing methods differ from conventional computing in that they are tolerant of imprecision, uncertainty and partial truths. These characteristics can be exploited to achieve tractability, robustness and low solution costs when the solution to a complex (in machine terms) problem is required. The principal constituents of soft computing include Neural Networks, Fuzzy Logic and Probabilistic Reasoning Systems. Genetic Algorithms (GAs), Evolutionary Algorithms, Chaos Theory, Complexity Theory and parts of Learning Theory all come under Probabilistic Reasoning Systems. Hybrid systems can be designed incorporating two or more aspects of soft computing that are more powerful than any of the components used in a stand-alone fashion. A unified framework is needed to implement and manipulate such systems. Such a framework will allow for easy visualisation of the underlying concepts and easy modification of the resulting computer models. In this thesis, an investigation of the major aspects of computational intelligence has been carried out. The main emphasis has been placed on developing an object-oriented framework for architecting computational intelligence systems. Object models for Neural Networks, Fuzzy Logic Systems and Evolutionary Computation systems have been developed. Software has been written in C++ to realise sample implementations of the various systems. Finally, practical applications and the results of using the Neural Networks, Fuzzy Logic systems and Genetic Algorithms developed in solving real-world problems are presented. A consistent notation based on the Object Modelling Technique (OMT) is used throughout the thesis to describe the software architectures from which the computer implementation models have been derived.
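The thesis builds its object models in C++ with OMT notation; purely to illustrate what a unified object-oriented framework buys, here is a minimal Python sketch of a shared interface that neural, fuzzy and evolutionary components could all implement. The class names and the toy model are hypothetical, not taken from the thesis:

```python
from abc import ABC, abstractmethod

class SoftComputingModel(ABC):
    """Hypothetical common interface: every paradigm trains on data and
    then maps inputs to outputs, so hybrids can be composed freely."""

    @abstractmethod
    def fit(self, inputs, targets): ...

    @abstractmethod
    def predict(self, x): ...

class NearestPrototypeNetwork(SoftComputingModel):
    """Toy stand-in for a neural model: memorises prototype vectors and
    answers with the target of the closest one."""

    def fit(self, inputs, targets):
        self.pairs = list(zip(inputs, targets))

    def predict(self, x):
        def sq_dist(pair):
            prototype, _ = pair
            return sum((a - b) ** 2 for a, b in zip(prototype, x))
        return min(self.pairs, key=sq_dist)[1]

model = NearestPrototypeNetwork()
model.fit([(0.0, 0.0), (1.0, 1.0)], ["low", "high"])
```

Because every component exposes the same fit/predict contract, a hybrid system can swap or compose parts without caring which branch of soft computing each one comes from, which is the visualisation and modification benefit the abstract claims for a unified framework.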
Study of the Application of Neural Networks in Internet Traffic Engineering
In this study, we showed various approaches implemented with Artificial Neural Networks for network resource management and Internet congestion control. Through a training process, Neural Networks can determine nonlinear relationships in a data set by associating the corresponding outputs to input patterns. Therefore, the application of these networks to Traffic Engineering can help achieve its general objective: "intelligent" agents or systems capable of adapting data flow according to available resources. In this article, we analyze the opportunity and feasibility of applying Artificial Neural Networks to a number of tasks related to Traffic Engineering. In previous sections, we present the basics of each of these disciplines, which are associated with Artificial Intelligence and Computer Networks respectively.
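The claim that a network can capture nonlinear input/output relationships is seen in miniature with XOR: no single-layer network can represent it, but one hidden layer suffices. The weights in this sketch are set by hand for clarity; in an actual application they would be found by the training process the abstract describes:

```python
import math

def sigmoid(z):
    """Standard logistic activation, squashing any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def mlp_xor(x1, x2):
    """A 2-2-1 multi-layer perceptron computing XOR, a nonlinear
    relationship that no single-layer network can represent."""
    h1 = sigmoid(10 * x1 + 10 * x2 - 5)     # hidden unit approximating OR
    h2 = sigmoid(-10 * x1 - 10 * x2 + 15)   # hidden unit approximating NAND
    return sigmoid(10 * h1 + 10 * h2 - 15)  # output: AND of the two, i.e. XOR
```

The hidden units carve the input space into regions that the output unit can then combine linearly, which is exactly what a linear model alone cannot do.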
A Decade of Neural Networks: Practical Applications and Prospects
The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.
The hardware implementation of an artificial neural network using stochastic pulse rate encoding principles
In this thesis the development of a hardware artificial neuron device and artificial neural network using stochastic pulse rate encoding principles is considered. After a review of neural network architectures and algorithmic approaches suitable for hardware implementation, a critical review of hardware techniques which have been considered in analogue and digital systems is presented. New results are presented demonstrating the potential of two learning schemes which adapt by the use of a single reinforcement signal. The techniques for computation using stochastic pulse rate encoding are presented and extended with novel circuits relevant to the hardware implementation of an artificial neural network. The generation of random numbers is the key to the encoding of data into the stochastic pulse rate domain. The formation of random numbers and multiple random bit sequences from a single PRBS generator has been investigated. Two techniques, Simulated Annealing and Genetic Algorithms, have been applied successfully to the problem of optimising the configuration of a PRBS random number generator for the formation of multiple random bit sequences and hence random numbers. A complete hardware design for an artificial neuron using stochastic pulse rate encoded signals has been described, designed, simulated, fabricated and tested before configuration of the device into a network to perform simple test problems. The implementation has shown that the processing elements of the artificial neuron are small and simple, but that there can be a significant overhead for the encoding of information into the stochastic pulse rate domain. The stochastic artificial neuron has the capability of on-line weight adaptation. The implementation of reinforcement schemes using the stochastic neuron as a basic element is discussed.
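The small processing elements and the encoding overhead the abstract describes both follow from the representation itself: a value in [0, 1] is carried by the rate of pulses in a random bitstream, and multiplying two independent streams then needs only a single AND gate per bit. A minimal software sketch of the general idea of unipolar stochastic computing, not of the thesis's hardware design:

```python
import random

def encode(p, length, rng):
    """Encode a value p in [0, 1] as a stochastic pulse stream: each bit
    is 1 with probability p, so the value lives in the pulse *rate*."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def decode(stream):
    """Recover the value as the observed pulse rate of the stream."""
    return sum(stream) / len(stream)

# Multiplication of two independent streams reduces to a per-bit AND,
# since P(a AND b) = P(a) * P(b) for independent pulses.
rng = random.Random(42)
a = encode(0.8, 10_000, rng)
b = encode(0.5, 10_000, rng)
product = [x & y for x, y in zip(a, b)]
```

The price of such simple arithmetic is precision: the decoded value is only an estimate whose error shrinks with the square root of the stream length, which is one way to read the significant encoding overhead the abstract reports.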
The Shallow and the Deep: A biased introduction to neural networks and old school machine learning
The Shallow and the Deep is a collection of lecture notes that offers an accessible introduction to neural networks and machine learning in general. However, it was clear from the beginning that these notes would not be able to cover this rapidly changing and growing field in its entirety. The focus lies on classical machine learning techniques, with a bias towards classification and regression. Other learning paradigms and many recent developments in, for instance, Deep Learning are not addressed or only briefly touched upon. Biehl argues that having a solid knowledge of the foundations of the field is essential, especially for anyone who wants to explore the world of machine learning with an ambition that goes beyond the application of some software package to some data set. Therefore, The Shallow and the Deep places emphasis on fundamental concepts and theoretical background. This also involves delving into the history and pre-history of neural networks, where the foundations for most of the recent developments were laid. These notes aim to demystify machine learning and neural networks without losing the appreciation for their impressive power and versatility.