Automatic analysis of electronic drawings using neural network
Neural network techniques have been found to be powerful tools in pattern recognition. They capture associations or discover regularities within a set of patterns in cases where the types, number of variables, or diversity of the data are very great, the relationships between variables are vaguely understood, or the relationships are difficult to describe adequately with conventional approaches.
This dissertation concerns research and system design aimed at recognizing the digital gate symbols and characters in electronic drawings. We have proposed: (1) a modified Kohonen neural network with a shift-invariant capability in pattern recognition; (2) an effective approach to optimizing the structure of the back-propagation neural network; (3) candidate-searching and pre-processing techniques to facilitate the automatic analysis of electronic drawings.
An analysis of the system's performance reveals that when the shift of an image pattern is not large and the rotation is only by n×90° (n = 1, 2, 3), the modified Kohonen neural network is superior to the conventional Kohonen neural network in terms of shift-invariant and limited rotation-invariant capabilities. As a result, the dimensionality of the Kohonen layer can be reduced significantly, compared with the conventional network, for the same performance. Moreover, the size of the subsequent neural network, say a back-propagation feed-forward neural network, can be decreased dramatically.
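The abstract does not spell out the modified network's update rule, but the conventional Kohonen (winner-take-all) update it builds on can be sketched in a few lines; the function name `kohonen_step` and the toy dimensions below are ours, for illustration only:

```python
import numpy as np

def kohonen_step(weights, x, lr=0.1):
    """One conventional Kohonen update: find the best-matching unit
    (BMU) and move its weight vector toward the input pattern."""
    dists = np.linalg.norm(weights - x, axis=1)  # distance to each unit
    bmu = int(np.argmin(dists))                  # winning unit
    weights[bmu] += lr * (x - weights[bmu])      # pull winner toward x
    return bmu

rng = np.random.default_rng(0)
weights = rng.random((4, 9))        # 4 units, 3x3 patterns flattened
x = np.zeros(9); x[0] = 1.0         # a toy corner-pixel pattern
bmu = kohonen_step(weights, x)
```

Repeated presentations pull the winning unit ever closer to the pattern; the modification described above adds shift tolerance on top of this basic competitive scheme.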
There are no known rules for specifying the number of nodes in the hidden layers of a feed-forward neural network. Increasing the size of a hidden layer usually improves recognition accuracy, while decreasing it generally improves generalization capability. We determine the optimal size by simulation, to attain a balance between accuracy and generalization. In general, this optimized back-propagation neural network outperforms conventional ones whose sizes are chosen by experience.
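The simulation-based selection described above amounts to a search loop over candidate hidden-layer sizes. In this sketch the `evaluate` callback stands in for training and validating a back-propagation network of each size, and `toy_score` is a purely hypothetical stand-in mimicking the accuracy/generalization trade-off:

```python
import numpy as np

def select_hidden_size(candidate_sizes, evaluate):
    """Pick the hidden-layer size with the highest validation score.
    `evaluate(size)` is assumed to train a network of that size and
    return its held-out accuracy (hypothetical callback)."""
    scores = {h: evaluate(h) for h in candidate_sizes}
    best = max(scores, key=scores.get)
    return best, scores

def toy_score(h, peak=16):
    # Illustrative stand-in only: score peaks at a moderate size,
    # as too-small nets underfit and too-large nets overgeneralize.
    return -abs(np.log2(h) - np.log2(peak))

best, scores = select_hidden_size([4, 8, 16, 32, 64], toy_score)
```

In practice each call to `evaluate` would be a full training run, so the candidate list is kept short.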
To further reduce computational complexity and save calculation time in the neural networks, pre-processing techniques have been developed to remove long circuit lines from the electronic drawings. This makes the candidate searching more effective.
Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks
In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call “transformational abstraction”. Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to “nuisance variation” in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply implementing 80s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing ripe with philosophical and psychological significance, because it is significantly better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain
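The combination of linear and non-linear processing the paper appeals to can be illustrated with a minimal NumPy sketch (all names and dimensions are ours): a linear convolution followed by a ReLU and max-pooling over positions yields the same response wherever the pattern appears, a toy instance of tolerance to "nuisance variation":

```python
import numpy as np

def conv_relu_pool(x, k):
    """The three DCNN building blocks in miniature: linear
    convolution, non-linear ReLU, then max-pooling over all
    positions, which discards location information."""
    y = np.convolve(x, k, mode="valid")  # linear filtering
    y = np.maximum(y, 0.0)               # non-linearity
    return y.max()                       # pooling over positions

k = np.array([1.0, 2.0, 1.0])            # a simple bump detector
x1 = np.zeros(16); x1[3] = 1.0           # pattern at one position
x2 = np.zeros(16); x2[9] = 1.0           # same pattern, shifted
```

Stacking such stages, each pooling over a limited window rather than globally, gives the iterative conversion into increasingly shift-tolerant formats that the paper calls transformational abstraction.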
Neural network techniques for position and scale invariant image classification
This research is concerned with the application of neural network techniques to the problems of classifying images in a manner that is invariant to changes in position and scale. In addition to the goal of invariant classification, the network has to classify the objects in a hierarchical manner, in which complex features are constructed from simpler features, and use unsupervised learning. The resultant hierarchical structure should be able to classify the image by having an internal representation that models the structure of the image.
After finding existing neural network techniques unsuitable, a new type of neural network was developed that differed from the conventional multi-layer perceptron architecture. This network was constructed from neurons that were grouped into feature detectors. These neurons were taught in an unsupervised manner using a technique based on Kohonen learning. A number of novel techniques were developed to improve the learning and classification performance of the network.
The network was able to retain the spatial relationship of the classified features; this inherent property resulted in the capability for position and scale invariant classification. As a consequence, an additional invariance filter was not required. In addition to achieving the invariance property, the developed techniques enabled multiple objects in an image to be classified.
When the network had learned the spatial relationships between the lower-level features, names could be assigned to the identified features. As part of the classification process, the system was able to identify the positions of the classified features in all layers of the network.
A software model of an artificial retina was used to test the grey scale classification performance of the network and to assess the response of the retina to changes in brightness.
Like the Neocognitron, the resulting network was developed solely for image classification. Although the Neocognitron is not designed for scale or position invariance, it was chosen for comparison because it has structural similarities and the ability to accommodate light changes in the image.
This type of network could be used as the basis for a 2D-scene-analysis neural network, in which the inherent parallelism of the neural network would provide simultaneous classification of the objects in the image.
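The idea of grouping neurons into feature detectors whose responses retain spatial relationships can be sketched as follows. This is a simplified stand-in, not the thesis's actual architecture: fixed template detectors replace the unsupervised Kohonen-trained ones, and the winner index is recorded at every image position:

```python
import numpy as np

def feature_map(img, detectors):
    """Apply a bank of 3x3 detectors at every position, recording
    which detector best matches there. Because the output is indexed
    by position, the spatial layout of detected features is retained,
    the property credited with enabling position invariance."""
    h, w = img.shape
    out = np.full((h - 2, w - 2), -1)
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i+3, j:j+3].ravel()
            out[i, j] = int(np.argmin(
                [np.linalg.norm(patch - d) for d in detectors]))
    return out

# Two toy edge detectors: "\" and "/" diagonals.
detectors = [np.eye(3).ravel(), np.fliplr(np.eye(3)).ravel()]
img = np.zeros((8, 8))
np.fill_diagonal(img[1:4, 1:4], 1.0)   # a "\" stroke near the corner
```

Shifting the input shifts the feature map by the same amount, so a higher layer that looks only at relative positions of features classifies the object regardless of where it sits.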
Artificial neural networks for image recognition: a study of feature extraction methods and an implementation for handwritten character recognition.
Thesis (M.Sc.)-University of Natal, Pietermaritzburg, 1996.
The use of computers for digital image recognition has become quite widespread. Applications include face recognition, handwriting interpretation and fingerprint analysis. A feature vector whose dimension is much lower than that of the original image data is used to represent the image. This removes redundancy from the data and drastically cuts the computational cost of the classification stage. The most important criterion for the extracted features is that they must retain as much as possible of the discriminatory information present in the original data. Feature extraction methods which have been used with neural networks include moment invariants, Zernike moments, Fourier descriptors, Gabor filters and wavelets. These, together with the Neocognitron, which incorporates feature extraction within a neural network architecture, are described, and two methods, Zernike moments and the Neocognitron, are chosen to illustrate the role of feature extraction in image recognition.
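As a minimal illustration of why moment-based features suit this role, central moments (the building blocks of the moment invariants and Zernike moments listed above) do not change when an object merely moves within the image. A NumPy sketch, with an illustrative blob of our own choosing:

```python
import numpy as np

def central_moment(img, p, q):
    """Central image moment mu_pq: a moment taken about the image
    centroid, which makes it invariant to translation."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                               # total mass
    xbar = (xs * img).sum() / m00                 # centroid x
    ybar = (ys * img).sum() / m00                 # centroid y
    return ((xs - xbar) ** p * (ys - ybar) ** q * img).sum()

img = np.zeros((10, 10)); img[2:5, 2:6] = 1.0            # a small blob
shifted = np.roll(np.roll(img, 3, axis=0), 2, axis=1)    # same blob, moved
```

Normalizing central moments by powers of `m00` adds scale invariance, and specific polynomial combinations (Hu's invariants, or the Zernike moments used in the thesis) add rotation invariance as well.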
A Taxonomy of Deep Convolutional Neural Nets for Computer Vision
Traditional architectures for solving computer vision problems, and the degree of success they enjoyed, have been heavily reliant on hand-crafted features. However, of late, deep learning techniques have offered a compelling alternative: that of automatically learning problem-specific features. With this new paradigm, every problem in computer vision is now being re-examined from a deep learning perspective. It has therefore become important to understand what kind of deep networks are suitable for a given problem. Although general surveys of this fast-moving paradigm (i.e. deep networks) exist, a survey specific to computer vision is missing. We specifically consider one form of deep network widely used in computer vision: convolutional neural networks (CNNs). We start with "AlexNet" as our base CNN and then examine the broad variations proposed over time to suit different applications. We hope that our recipe-style survey will serve as a guide, particularly for novice practitioners intending to use deep-learning techniques for computer vision.
Comment: Published in Frontiers in Robotics and AI (http://goo.gl/6691Bm)
Visual pattern recognition using neural networks
Neural networks have been widely studied in a number of fields, such as neural architectures, neurobiology, statistics of neural network and pattern classification. In the field of pattern classification, neural network models are applied on numerous applications, for instance, character recognition, speech recognition, and object recognition. Among these, character recognition is commonly used to illustrate the feature and classification characteristics of neural networks.
In this dissertation, the theoretical foundations of artificial neural networks are first reviewed and existing neural models are studied. The Adaptive Resonance Theory (ART) model is improved to achieve more reasonable classification results. Experiments in applying the improved model to image enhancement and printed character recognition are discussed and analyzed. We also study the theoretical foundation of Neocognitron in terms of feature extraction, convergence in training, and shift invariance.
We investigate the use of multilayered perceptrons with recurrent connections as general-purpose modules for image operations in parallel architectures. The networks are trained to carry out classification rules in image transformation. The training patterns can be derived from user-defined transformations, or from loading a pair consisting of a sample image and its target image when prior knowledge of the transformation is unavailable. Applications of our model include image smoothing, enhancement, edge detection, noise removal, morphological operations, image filtering, etc. With a number of stages stacked together, we are able to apply a series of operations to the image; that is, by providing various sets of training patterns, the system can adapt itself to the concatenated transformation. We also discuss and experiment with applying existing neural models, such as the multilayered perceptron, to realize morphological operations and other commonly used imaging operations.
Some new neural architectures and training algorithms for the implementation of morphological operations are designed and analyzed. The algorithms are proven correct and efficient. The proposed morphological neural architectures are applied to construct the feature extraction module of a personal handwritten character recognition system. The system was trained and tested with scanned images of handwritten characters. The feasibility and efficiency are discussed along with the experimental results.
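The abstract does not give the proposed architectures themselves, but the morphological operations they implement are simple to state. A sketch of binary dilation as a window-maximum, which maps naturally onto a layer of max-responding neurons (a toy example of our own, not the dissertation's design):

```python
import numpy as np

def dilate(img, size=3):
    """Binary dilation as a max filter: each output pixel is the
    maximum over its size x size neighbourhood, so foreground
    regions grow by the radius of the window. Erosion is the dual
    operation, with min in place of max."""
    pad = size // 2
    p = np.pad(img, pad)                 # zero-pad the borders
    h, w = img.shape
    return np.array([[p[i:i+size, j:j+size].max()
                      for j in range(w)] for i in range(h)])

img = np.zeros((5, 5)); img[2, 2] = 1.0  # a single foreground pixel
```

Composing dilations and erosions gives openings and closings, the operations used here to extract stroke features from handwritten characters.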
Integration of traditional imaging, expert systems, and neural network techniques for enhanced recognition of handwritten information
Includes bibliographical references (p. 33-37). Research supported by the I.F.S.R.C. at M.I.T. Amar Gupta, John Riordan, Evelyn Roman.