
    Visualizing classification of natural video sequences using sparse, hierarchical models of cortex.

    Recent work on hierarchical models of visual cortex has reported state-of-the-art accuracy on whole-scene labeling using natural still imagery. This raises the question of whether the reported accuracy may be due to the sophisticated, non-biological back-end supervised classifiers typically used (support vector machines) and/or the limited number of images used in these experiments. In particular, is the model classifying features from the object or the background? Previous work (Landecker, Brumby, et al., COSYNE 2010) proposed tracing the spatial support of a classifier’s decision back through a hierarchical cortical model to determine which parts of the image contributed to the classification, and to compare these with the positions of objects in the scene. In this way, we can go beyond standard measures of accuracy to provide tools for visualizing and analyzing high-level object classification. We now describe new work exploring the extension of these ideas to detection of objects in video sequences of natural scenes.
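
    As a rough illustration of the back-tracing idea described in this abstract, the sketch below builds a toy two-stage max-pooling hierarchy and follows the winning activations back to the input pixel; the layer sizes, pooling windows, and the use of plain max-pooling are illustrative assumptions, not the authors' cortical model.

```python
import numpy as np

def pool_with_provenance(feature_map, win=2):
    """Max-pool a 2-D feature map, remembering which input cell won each pool."""
    h, w = feature_map.shape
    pooled = np.zeros((h // win, w // win))
    provenance = {}                                  # pooled (i, j) -> original (r, c)
    for i in range(h // win):
        for j in range(w // win):
            block = feature_map[i*win:(i+1)*win, j*win:(j+1)*win]
            r, c = np.unravel_index(np.argmax(block), block.shape)
            pooled[i, j] = block[r, c]
            provenance[(i, j)] = (i*win + r, j*win + c)
    return pooled, provenance

# Toy "image" and two pooling stages standing in for a cortical hierarchy.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
l1, prov1 = pool_with_provenance(image)              # 8x8 -> 4x4
l2, prov2 = pool_with_provenance(l1)                 # 4x4 -> 2x2

# Pretend the classifier's decision is driven by the strongest top-level unit,
# then trace its spatial support back down to the pixel level.
top = np.unravel_index(np.argmax(l2), l2.shape)
mid = prov2[top]
pixel = prov1[mid]
print(f"top unit {top} <- layer-1 unit {mid} <- image pixel {pixel}")
```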

    A Review of Object Detection Models based on Convolutional Neural Network

    The Convolutional Neural Network (CNN) has become the state of the art for object detection in images. In this chapter, we explain different state-of-the-art CNN-based object detection models. We organize the review by categorizing those detection models according to two different approaches: the two-stage approach and the one-stage approach. The chapter traces advancements in object detection models from R-CNN to the latest RefineDet, discusses the model description and training details of each model, and draws a comparison among those models. Comment: 17 pages, 11 figures, 1 table
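
    As a hedged illustration of the two categories contrasted in this review, the sketch below loads one two-stage detector (Faster R-CNN) and one one-stage detector (RetinaNet) from torchvision and runs both on a dummy image; the choice of pretrained models, the `weights="DEFAULT"` argument, and the 0.5 score threshold are assumptions about a recent torchvision release, not details from the chapter.

```python
import torch
import torchvision

# Two-stage detector: region proposals first, then per-region classification/refinement.
two_stage = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# One-stage detector: predicts boxes and classes in a single dense pass.
one_stage = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 480, 640)            # stand-in for a real RGB image in [0, 1]
with torch.no_grad():
    for name, model in [("two-stage (Faster R-CNN)", two_stage),
                        ("one-stage (RetinaNet)", one_stage)]:
        out = model([image])[0]            # dict with 'boxes', 'labels', 'scores'
        keep = out["scores"] > 0.5
        print(name, "detections above 0.5:", int(keep.sum()))
```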

    Support Vector Machine Histogram: New Analysis and Architecture Design Method of Deep Convolutional Neural Network

    The deep convolutional neural network (DCNN) is a kind of hierarchical neural network model that has attracted attention in recent years because of its high classification performance. Through learning, a DCNN acquires a feature representation: parameters that capture the characteristics of the input. However, its internal workings and the design of its network architecture remain insufficiently understood. To deal with these problems, we propose a novel DCNN analysis method, the “support vector machine (SVM) histogram”. This method examines the spatial distribution of the DCNN-extracted feature representation relative to the decision boundary of a linear SVM. We show that DCNN hierarchical processing can be interpreted using this method, and that the resulting SVM histogram also makes DCNN architecture design possible. In this study, we designed an architecture for a large-scale natural image dataset and achieved higher accuracy than the original DCNN.
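
    A minimal sketch of the measurement underlying such an SVM histogram, assuming synthetic stand-in features rather than features extracted from the authors' DCNN: fit a linear SVM on one layer's features and histogram each sample's signed distance to the decision boundary.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Placeholder for features extracted from one DCNN layer (n_samples x n_features).
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(-1.0, 1.0, (200, 64)),
                      rng.normal(+1.0, 1.0, (200, 64))])
labels = np.array([0] * 200 + [1] * 200)

svm = LinearSVC(C=1.0, max_iter=10000).fit(features, labels)

# Signed distance of each sample to the linear SVM decision boundary.
margins = svm.decision_function(features) / np.linalg.norm(svm.coef_)

# The "SVM histogram": how the layer's representation spreads around the boundary.
counts, edges = np.histogram(margins, bins=20)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"[{lo:6.2f}, {hi:6.2f})  {c}")
```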

    Temporal - spatial recognizer for multi-label data

    Pattern recognition is an important artificial intelligence task with practical applications in many fields, such as medicine and species distribution. Such applications involve overlapping data points, as demonstrated in multi-label datasets. Hence, there is a need for a recognition algorithm that can separate the overlapping data points in order to recognize the correct pattern. Existing recognition methods are sensitive to noise and overlapping points because they cannot recognize a pattern when the positions of the data points shift. Furthermore, these methods do not incorporate temporal information in the recognition process, which leads to low-quality data clustering. In this study, an improved pattern recognition method based on Hierarchical Temporal Memory (HTM) is proposed to handle the overlapping data points of multi-label datasets. The imHTM (improved HTM) method includes improvements in two of its components: feature extraction and data clustering. The first improvement is realized as the TS-Layer Neocognitron algorithm, which solves the position-shift problem in the feature extraction phase. The data clustering step has two improvements, TFCM and cFCM (TFCM with a limit-Chebyshev distance metric), which allow the overlapping data points that occur in patterns to be separated correctly into the relevant clusters by temporal clustering. Experiments on five datasets were conducted to compare the proposed method (imHTM) against statistical, template and structural pattern recognition methods. The results showed a recognition accuracy of 99% compared with template matching methods (Feature-Based Approach, Area-Based Approach), statistical methods (Principal Component Analysis, Linear Discriminant Analysis, Support Vector Machines and Neural Networks) and a structural method (the original HTM). The findings indicate that the improved HTM can give optimal pattern recognition accuracy, especially on multi-label datasets.
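
    The clustering improvements are described only at a high level in this abstract, so the following is a generic fuzzy c-means sketch using a Chebyshev (max-abs) distance, offered as an assumption about the flavor of cFCM rather than the authors' exact TFCM/cFCM algorithms (no temporal term is modeled here).

```python
import numpy as np

def fuzzy_c_means_chebyshev(X, n_clusters=2, m=2.0, n_iter=100, eps=1e-9, seed=0):
    """Generic fuzzy c-means clustering with the Chebyshev distance."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n_clusters, n))
    U /= U.sum(axis=0)                                   # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        # Chebyshev distance from every point to every center.
        D = np.abs(X[None, :, :] - centers[:, None, :]).max(axis=2) + eps
        U = 1.0 / (D ** (2 / (m - 1)))
        U /= U.sum(axis=0)
    return centers, U

# Two overlapping blobs as a stand-in for overlapping multi-label data points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
centers, U = fuzzy_c_means_chebyshev(X)
print("cluster centers:\n", centers)
print("soft assignment of first point:", U[:, 0])
```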

    Visual pattern recognition using neural networks

    Neural networks have been widely studied in a number of fields, such as neural architectures, neurobiology, statistics of neural networks and pattern classification. In the field of pattern classification, neural network models are applied to numerous applications, for instance, character recognition, speech recognition, and object recognition. Among these, character recognition is commonly used to illustrate the feature and classification characteristics of neural networks. In this dissertation, the theoretical foundations of artificial neural networks are first reviewed and existing neural models are studied. The Adaptive Resonance Theory (ART) model is improved to achieve more reasonable classification results. Experiments applying the improved model to image enhancement and printed character recognition are discussed and analyzed. We also study the theoretical foundation of the Neocognitron in terms of feature extraction, convergence in training, and shift invariance. We investigate the use of multilayered perceptrons with recurrent connections as general-purpose modules for image operations in parallel architectures. The networks are trained to carry out classification rules in image transformation. The training patterns can be derived from user-defined transformations, or from a pair consisting of a sample image and its target image when prior knowledge of the transformation is unavailable. Applications of our model include image smoothing, enhancement, edge detection, noise removal, morphological operations, image filtering, etc. With a number of stages stacked together, we are able to apply a series of operations to the image; that is, by providing various sets of training patterns, the system can adapt itself to the concatenated transformation. We also discuss and experiment with applying existing neural models, such as the multilayered perceptron, to realize morphological operations and other commonly used imaging operations. Some new neural architectures and training algorithms for the implementation of morphological operations are designed and analyzed, and the algorithms are proven correct and efficient. The proposed morphological neural architectures are applied to construct the feature extraction module of a personal handwritten character recognition system. The system was trained and tested with scanned images of handwritten characters. The feasibility and efficiency are discussed along with the experimental results.
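
    A minimal sketch of the idea that morphological operations can be realized with neuron-like thresholded weighted sums, assuming binary dilation and erosion over a 3x3 structuring element; this is a generic illustration, not the dissertation's specific architectures or training algorithms.

```python
import numpy as np
from scipy.signal import convolve2d

def neural_morphology(image, op="dilate"):
    """Binary dilation/erosion as a single 'neuron': weighted sum followed by a step function."""
    kernel = np.ones((3, 3))                             # 3x3 structuring element as the weights
    summed = convolve2d(image, kernel, mode="same")      # weighted sum over the neighborhood
    threshold = 1 if op == "dilate" else kernel.sum()    # ANY neighbor set vs ALL neighbors set
    return (summed >= threshold).astype(int)

img = np.zeros((7, 7), dtype=int)
img[3, 3] = 1
dilated = neural_morphology(img, "dilate")               # single pixel grows to a 3x3 block
eroded = neural_morphology(dilated, "erode")             # erosion shrinks it back to one pixel
print("dilated:\n", dilated)
print("eroded back:\n", eroded)
```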

    Advancements in Image Classification using Convolutional Neural Network

    The Convolutional Neural Network (CNN) is the state of the art for the image classification task. Here we briefly discuss the different components of a CNN. In this paper, we explain different CNN architectures for image classification, showing the advancements in CNNs from LeNet-5 to the latest SENet model. We discuss the model description and training details of each model and draw a comparison among those models. Comment: 9 pages, 15 figures, 3 tables. Submitted to the 2018 Fourth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN 2018)
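
    As a hedged illustration of the most recent model mentioned in this abstract (SENet), the sketch below implements a standard squeeze-and-excitation block in PyTorch; the channel count and reduction ratio are illustrative choices, and this is the published SE block pattern rather than anything specific to the paper's discussion.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: re-weight channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        squeeze = x.mean(dim=(2, 3))            # global average pool -> (b, c)
        scale = self.fc(squeeze).view(b, c, 1, 1)
        return x * scale                        # channel-wise re-weighting

# Example: apply an SE block to a feature map from some earlier conv layer.
features = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(features).shape)              # torch.Size([2, 64, 32, 32])
```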