
    Deep Neural Networks - A Brief History

    Introduction to deep neural networks and their history. Comment: 14 pages, 14 figures

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and, most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

    Neural network technology for recognizing objects in integrated circuit layouts

    Methods and algorithms for image processing and recognition based on neural networks are considered, as applied to computer vision systems for the computer-aided design (CAD) of integrated circuits. The structure of a computer vision software system implementing neural network technology for the recognition of objects in integrated circuit layouts is presented.

    Neural Dataset Generality

    Often the filters learned by Convolutional Neural Networks (CNNs) from different datasets appear similar, especially in the first few layers. This similarity of filters is exploited for transfer learning, and several studies have analysed such transferability of features. It is also used as an initialization technique for different tasks on the same dataset or for the same task on similar datasets. Off-the-shelf CNN features have capitalized on this idea, promoting their networks as the most transferable and general, and they are used rather casually in day-to-day computer vision tasks. It is curious that, while the filters learned by these CNNs are related to the atomic structures of the images from which they are learnt, all datasets yield similar-looking low-level filters. With the understanding that a dataset containing many such atomic structures learns general filters, and is therefore useful for initializing other networks, we propose a way to analyse and quantify generality among datasets from their accuracies on transferred filters. We applied this metric to several popular character recognition and natural image datasets and a medical image dataset, and arrived at some interesting conclusions. On further experimentation we also discovered that particular classes within a dataset are themselves more general than others. Comment: Long version of the paper accepted at IEEE International Conference on Image Processing 201
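    As a rough illustration of the transferred-filter evaluation this abstract describes, the sketch below copies a source network's first-layer filters into a target network, freezes them, and compares accuracy against a fully trained baseline. The model definition, layer choice, and scoring ratio are assumptions for illustration, not the paper's code.

```python
# Hypothetical sketch: gauge how "general" a source dataset's low-level
# filters are by transferring them to a target task and measuring accuracy.
import copy
import torch.nn as nn

def make_cnn(num_classes: int) -> nn.Module:
    """A tiny CNN; the first conv layer holds the filters we transfer."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.LazyLinear(num_classes),
    )

def transfer_first_layer(source: nn.Module, target: nn.Module) -> None:
    """Copy the source's first-layer filters into the target and freeze them,
    so only the remaining layers are trained on the target dataset."""
    target[0].load_state_dict(copy.deepcopy(source[0].state_dict()))
    for p in target[0].parameters():
        p.requires_grad_(False)

def generality_score(acc_transferred: float, acc_baseline: float) -> float:
    """One plausible score: accuracy retained on the target task after
    replacing its own first-layer filters with the transferred ones."""
    return acc_transferred / acc_baseline
```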

    Artificial neural networks for image recognition : a study of feature extraction methods and an implementation for handwritten character recognition.

    Thesis (M.Sc.)-University of Natal, Pietermaritzburg, 1996. The use of computers for digital image recognition has become quite widespread. Applications include face recognition, handwriting interpretation and fingerprint analysis. A feature vector whose dimension is much lower than that of the original image data is used to represent the image. This removes redundancy from the data and drastically cuts the computational cost of the classification stage. The most important criterion for the extracted features is that they must retain as much as possible of the discriminatory information present in the original data. Feature extraction methods which have been used with neural networks include moment invariants, Zernike moments, Fourier descriptors, Gabor filters and wavelets. These, together with the Neocognitron, which incorporates feature extraction within a neural network architecture, are described, and two methods, Zernike moments and the Neocognitron, are chosen to illustrate the role of feature extraction in image recognition.
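    To make the dimensionality-reduction idea concrete, here is a minimal moment-based feature extractor: it collapses an image to a handful of normalized central moments, which are invariant to translation and normalized for scale (the thesis uses Zernike moments and the Neocognitron; this simpler variant is only an illustrative stand-in).

```python
# Minimal sketch (not the thesis code): reduce an image to a short vector of
# normalized central moments, a classic low-dimensional shape feature.
import numpy as np

def moment_features(image: np.ndarray, max_order: int = 3) -> np.ndarray:
    """Return normalized central moments eta_pq for 2 <= p+q <= max_order."""
    img = image.astype(float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00          # intensity centroid
    ybar = (ys * img).sum() / m00
    feats = []
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            order = p + q
            if 2 <= order <= max_order:
                mu = (((xs - xbar) ** p) * ((ys - ybar) ** q) * img).sum()
                eta = mu / (m00 ** (1 + order / 2))   # scale normalization
                feats.append(eta)
    return np.array(feats)

# Example: a 28x28 character image collapses to a vector of 7 numbers.
features = moment_features(np.random.rand(28, 28))
```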

    Automatic analysis of electronic drawings using neural network

    Neural network techniques have been found to be powerful tools in pattern recognition. They capture associations or discover regularities within a set of patterns where the types, number of variables, or diversity of the data are very great, where the relationships between variables are vaguely understood, or where the relationships are difficult to describe adequately with conventional approaches. In this dissertation, which concerns the research and system design aimed at recognizing digital gate symbols and characters in electronic drawings, we propose: (1) a modified Kohonen neural network with a shift-invariant capability in pattern recognition; (2) an effective approach to optimizing the structure of the back-propagation neural network; (3) candidate-searching and pre-processing techniques to facilitate the automatic analysis of electronic drawings. An analysis of the system performance reveals that when the shift of an image pattern is not large and the rotation is only by n×90° (n = 1, 2, 3), the modified Kohonen neural network is superior to the conventional Kohonen neural network in terms of shift-invariant and limited rotation-invariant capabilities. As a result, the dimensionality of the Kohonen layer can be reduced significantly compared with the conventional design for the same performance, and the size of the subsequent neural network, such as a back-propagation feed-forward network, can be decreased dramatically. There are no known rules for specifying the number of nodes in the hidden layers of a feed-forward neural network: increasing the size of the hidden layer usually improves recognition accuracy, while decreasing it generally improves generalization. We determine the optimal size by simulation so as to strike a balance between accuracy and generalization; this optimized back-propagation network generally outperforms conventional networks designed by experience. To further reduce the computational complexity and the time spent in the neural networks, pre-processing techniques have been developed to remove long circuit lines from the electronic drawings, which makes candidate searching more effective.
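    The "determine the optimal hidden-layer size by simulation" step amounts to sweeping candidate sizes and comparing held-out accuracy. The sketch below shows that loop under stated assumptions: scikit-learn's MLPClassifier and the small digits dataset stand in for the dissertation's back-propagation network and drawing data.

```python
# Hedged sketch of choosing a hidden-layer size by simulation: train one
# network per candidate size and compare accuracy on a validation split.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

results = {}
for hidden in (8, 16, 32, 64, 128):          # candidate hidden-layer sizes
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=300,
                        random_state=0)
    net.fit(X_train, y_train)
    # Validation accuracy stands in for the accuracy/generalization balance.
    results[hidden] = net.score(X_val, y_val)

best = max(results, key=results.get)
print(f"best hidden-layer size: {best} (val acc {results[best]:.3f})")
```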

    Deep learning model combination and regularization using convolutional neural networks

    Convolutional neural networks (CNNs) were inspired by biology. They are hierarchical neural networks whose convolutional layers alternate with subsampling layers, reminiscent of the simple and complex cells in the primary visual cortex [Fuk86a]. In recent years, CNNs have emerged as a powerful machine learning model and have achieved the best results on many object recognition benchmarks [ZF13, HSK+12, LCY14, CMMS12]. In this dissertation, we introduce two new proposals for convolutional neural networks. The first is a method for combining the output probabilities of several CNNs, which we call the Weighted Convolutional Neural Network Ensemble. Each network has an associated weight, so that networks with better performance have a greater influence when classifying a pattern than networks that performed worse. This approach produces better results than the common method of combining networks by simply averaging their output probabilities to make predictions. The second, which we call DropAll, is a generalization of two well-known methods for regularizing the fully-connected layers of convolutional neural networks, DropOut [HSK+12] and DropConnect [WZZ+13]. Applying these methods amounts to sub-sampling a neural network by dropping units: when training with DropOut, a randomly selected subset of the output layer's activations is dropped; when training with DropConnect, a randomly selected subset of weights is dropped. With DropAll we can apply both methods simultaneously. We show the validity of our proposals by improving the classification error on a common image classification benchmark.
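    A minimal sketch of the two ideas described above, under assumptions rather than the dissertation's actual code: the ensemble weights each network's class probabilities by its validation accuracy (one plausible weighting), and the DropAll-style layer applies a DropConnect-style weight mask and a DropOut-style activation mask in the same forward pass.

```python
# Illustrative sketch of a performance-weighted probability ensemble and of
# dropping both weights and activations together during training.
import numpy as np

def weighted_ensemble(probs_per_net: list, val_accuracies: list) -> np.ndarray:
    """Combine per-network class probabilities, weighting each network by
    its validation accuracy (an assumed choice of weight)."""
    w = np.asarray(val_accuracies, dtype=float)
    w = w / w.sum()
    stacked = np.stack(probs_per_net)        # (n_nets, n_samples, n_classes)
    return np.tensordot(w, stacked, axes=1)  # weighted average of probabilities

def dropall_layer(x: np.ndarray, W: np.ndarray, rng: np.random.Generator,
                  p_unit: float = 0.5, p_weight: float = 0.5) -> np.ndarray:
    """One fully-connected layer with DropConnect-style weight masking and
    DropOut-style unit masking applied simultaneously (training time)."""
    weight_mask = rng.random(W.shape) > p_weight   # drop individual weights
    out = x @ (W * weight_mask)
    unit_mask = rng.random(out.shape) > p_unit     # drop output activations
    return out * unit_mask
```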