Feature Distilled Tracking
Feature extraction and representation is one of the most important components of fast, accurate, and robust visual tracking. Very deep convolutional neural networks (CNNs) provide effective tools for feature extraction with good generalization ability. However, extracting features with very deep CNN models requires high-performance hardware because of their large computational complexity, which prohibits their use in real-time applications. To alleviate this problem, we aim to obtain small, fast-to-execute shallow models through model compression for visual tracking. Specifically, we propose a small feature distilled network (FDN) for tracking that imitates the intermediate representations of a much deeper network. The FDN extracts rich visual features at higher speed than the original deeper network. To speed up tracking further, we introduce a shift-and-stitch method that reduces the number of arithmetic operations while keeping the spatial resolution of the distilled feature maps unchanged. Finally, a scale-adaptive discriminative correlation filter is learned on the distilled features to handle scale variation of the target. Comprehensive experimental results on object tracking benchmark datasets show that the proposed approach achieves a 5x speed-up with performance competitive with state-of-the-art deep trackers.
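The core distillation idea above, a small student network trained to imitate the intermediate feature maps of a deeper teacher, can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the feature shapes, the mean-squared-error loss, and the toy data are all assumptions.

```python
import numpy as np

def distillation_loss(student_feats, teacher_feats):
    """Mean-squared error between student and teacher feature maps.

    Both inputs are assumed to share the same shape (C, H, W); in
    practice a learned adaptation layer would first match channel
    counts before the comparison.
    """
    return float(np.mean((student_feats - teacher_feats) ** 2))

# Toy example: 8-channel, 16x16 feature maps. The student here is
# simulated as the teacher plus small noise, standing in for an
# imperfectly trained shallow network.
rng = np.random.default_rng(0)
teacher = rng.standard_normal((8, 16, 16))
student = teacher + 0.1 * rng.standard_normal((8, 16, 16))

loss = distillation_loss(student, teacher)
print(loss)
```

During training this loss would be minimized with respect to the student's parameters, driving its intermediate representations toward the teacher's.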
Design and implementation of image based object recognition
The aim of this paper is to design and implement image-based object recognition. This is a challenging task for advanced object recognition systems: recognizing objects in images is something humans do very well, but something automated systems have traditionally struggled with. A pre-trained AlexNet model was used for training, chosen for its straightforward architecture, on the large-scale CIFAR-10 dataset in MATLAB R2019a. The dataset was split 70% for training and 30% for testing. This setting has prompted experimentation with convolutional network architectures as well as new algorithms to train them. This paper presents an approach to training networks that improves their robustness in recognizing object images in MATLAB R2019a. The training strategy is then evaluated on the designed AlexNet network architecture. The study found that the training algorithm could improve robustness across different object classes (e.g. dog, frog, deer, automobile, airplane) with high recognition accuracy. When the advantages of different architectures were evaluated, recognition accuracy was around 98% across all object classes. This is consistent with findings from classical object recognition that feed-forward neural networks can perform well, with high recognition accuracy.
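The 70/30 train/test split described above can be sketched as follows. This is a minimal illustration only; the CIFAR-10 loading and the AlexNet training themselves are omitted, and the sample file names are hypothetical placeholders.

```python
import random

def train_test_split(items, train_ratio=0.7, seed=42):
    """Shuffle a list of samples and split it into train/test partitions."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = items[:]                # copy so the input list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Toy example with 10 labelled samples (file name, class index).
samples = [("img_%d.png" % i, i % 2) for i in range(10)]
train, test = train_test_split(samples)
print(len(train), len(test))  # 7 3
```

Shuffling before the cut matters: without it, a dataset stored in class order would put some classes entirely in the test partition.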
Texture features based microscopic image classification of liver cellular granuloma using artificial neural networks
Automated classification of Schistosoma mansoni granulomatous microscopic images of mouse liver using Artificial Intelligence (AI) technologies is key to accurate diagnosis and treatment. In this paper, grey-difference-statistics-based features, namely three Gray-Level Co-occurrence Matrix (GLCM) features and fifteen Gray Gradient Co-occurrence Matrix (GGCM) features, were calculated by correlative analysis. Ten of these features were selected for three-level cellular granuloma classification using a Scaled Conjugate Gradient Back-Propagation Neural Network (SCG-BPNN). Cross-entropy was then calculated to evaluate the proposed sigmoid-input, ten-hidden-layer network. The results show that the SCG-BPNN with texture features achieves a higher recognition rate than one using morphological features such as shape, size, contour, thickness, and other geometry-based features. The proposed method also achieves a high accuracy of 87.2%, compared with the Back-Propagation Neural Network (BPNN), Back-Propagation Hopfield Neural Network (BPHNN), and Convolutional Neural Network (CNN).
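As an illustration of the texture features involved, a gray-level co-occurrence matrix and one classic feature derived from it (contrast) can be computed as follows. This is a minimal sketch for a single horizontal pixel offset; the paper's actual GLCM/GGCM feature set and offsets are not reproduced here, and the tiny test image is invented for demonstration.

```python
import numpy as np

def glcm(image, levels):
    """Normalized co-occurrence counts for horizontally adjacent pixels."""
    m = np.zeros((levels, levels), dtype=float)
    for row in image:
        for a, b in zip(row[:-1], row[1:]):  # left pixel a, right pixel b
            m[a, b] += 1
    total = m.sum()
    return m / total if total else m

def glcm_contrast(m):
    """Contrast feature: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(m.shape)
    return float(np.sum((i - j) ** 2 * m))

# 4x4 toy image with 4 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
m = glcm(img, levels=4)
print(round(glcm_contrast(m), 4))  # 0.5833
```

Homogeneous regions produce co-occurrences near the matrix diagonal and hence low contrast; sharp gray-level transitions push mass off the diagonal and raise it, which is what makes such features useful for discriminating tissue textures.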