RCCNet: An Efficient Convolutional Neural Network for Histological Routine Colon Cancer Nuclei Classification
Efficient and precise classification of histological cell nuclei is of utmost
importance due to its potential applications in medical image analysis. It
would help medical practitioners better understand and explore various factors
in cancer treatment. The classification of histological cell nuclei is a
challenging task due to cellular heterogeneity. This paper proposes an
efficient Convolutional Neural Network (CNN) based architecture for the
classification of histological routine colon cancer nuclei, named RCCNet. The
main objective of this network is to keep the CNN model as simple as possible.
The proposed RCCNet model consists of only 1,512,868 learnable parameters,
significantly fewer than popular CNN models such as AlexNet, CIFARVGG,
GoogLeNet, and WRN. The
experiments are conducted on the publicly available routine colon cancer
histological dataset "CRCHistoPhenotypes". The results of the proposed RCCNet
model are compared with five state-of-the-art CNN models in terms of accuracy,
weighted average F1 score, and training time. The proposed method achieved a
classification accuracy of 80.61% and a weighted average F1 score of 0.7887.
The proposed RCCNet is more efficient in terms of training time and
generalizes better with respect to data over-fitting.
Comment: Published in ICARCV 201
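As a rough illustration of how such a parameter budget is counted, the learnable parameters of convolutional and fully connected layers can be tallied directly. The layer shapes below are hypothetical examples chosen for the sketch, not RCCNet's actual architecture:

```python
# Count learnable parameters of a small CNN, layer by layer.
# The layer shapes here are hypothetical, NOT the actual RCCNet design.

def conv_params(kernel, c_in, c_out):
    """Weights (kernel*kernel*c_in per output channel) plus one bias per channel."""
    return (kernel * kernel * c_in + 1) * c_out

def fc_params(n_in, n_out):
    """Weights plus one bias per output unit."""
    return (n_in + 1) * n_out

layers = [
    ("conv1", conv_params(5, 3, 32)),       # 5x5 conv, RGB input -> 32 maps
    ("conv2", conv_params(5, 32, 64)),      # 5x5 conv, 32 -> 64 maps
    ("fc1",   fc_params(64 * 8 * 8, 256)),  # flattened 8x8x64 feature maps
    ("fc2",   fc_params(256, 4)),           # 4 nuclei classes
]

total = sum(n for _, n in layers)
for name, n in layers:
    print(f"{name}: {n:,} parameters")
print(f"total: {total:,} parameters")
```

Keeping the fully connected layers narrow is what dominates the budget here: fc1 alone accounts for roughly 95% of the total, which is why compact CNNs shrink or replace the dense layers first.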
Writer identification in Indic scripts: a stroke distribution based approach
This paper proposes to represent an offline handwritten document, for writer identification, as a distribution of strokes over an alphabet of strokes. A data-driven approach creates the stroke alphabet as follows: strokes are extracted from the image using a regression method; the extracted strokes are represented as fixed-length vectors in a vector space; and the strokes are clustered into stroke categories to form the alphabet. The paper proposes a clustering method with a new clustering score whereby an optimal number of clusters (categories) is automatically identified. For a given document, a histogram is built from the frequency of occurrence of elements of the stroke alphabet; this histogram represents the writer's writing style. A Support Vector Machine is used for classification. Offline handwritten documents written in two different Indic languages, viz., Telugu and Kannada, were considered for the experimentation. The proposed method obtains results comparable to other methods in the literature.
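The histogram step above can be sketched as follows. This is a minimal sketch: the stroke vectors and cluster centers are assumed to come from the extraction and clustering stages described in the abstract, and a real system would feed the histogram to an SVM rather than use it directly:

```python
import math

def stroke_histogram(strokes, alphabet):
    """Assign each stroke vector to its nearest stroke category
    (Euclidean distance) and return the normalized frequency
    histogram that characterizes the writer's style."""
    counts = [0] * len(alphabet)
    for s in strokes:
        nearest = min(range(len(alphabet)),
                      key=lambda i: math.dist(s, alphabet[i]))
        counts[nearest] += 1
    total = sum(counts) or 1  # avoid division by zero on empty input
    return [c / total for c in counts]

# Toy example: a 2-letter stroke alphabet and four extracted strokes.
alphabet = [(0.0, 0.0), (10.0, 10.0)]
strokes = [(1.0, 1.0), (9.0, 9.0), (0.0, 1.0), (10.0, 9.0)]
print(stroke_histogram(strokes, alphabet))  # -> [0.5, 0.5]
```

Normalizing by the stroke count makes documents of different lengths comparable, which is what lets the histogram serve as a fixed-length feature vector for the classifier.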
Deep Model Compression based on the Training History
Deep Convolutional Neural Networks (DCNNs) have shown promising results in
several visual recognition problems which motivated the researchers to propose
popular architectures such as LeNet, AlexNet, VGGNet, ResNet, and many more.
These architectures come at the cost of high computational complexity and
parameter storage. To reduce storage and computational costs, deep model
compression methods have evolved. We propose a novel History Based Filter
Pruning (HBFP) method that utilizes the network training history for filter
pruning. Specifically, we prune redundant filters by observing similar
patterns in the L1-norms of filters (absolute sum of weights) over the training
epochs. We iteratively prune the redundant filters of a CNN in three steps.
First, we train the model and select filter pairs that contain redundant
filters. Next, we optimize the network to increase the similarity between the
filters in each pair. This allows us to prune one filter from each pair, based
on its importance, without much information loss. Finally, we retrain the
network to regain the performance lost due to filter pruning. We test our
approach on popular architectures: LeNet-5 on the MNIST dataset and VGG-16,
ResNet-56, and ResNet-110 on the CIFAR-10 dataset. The proposed pruning method
outperforms the state-of-the-art in terms of FLOPs (floating-point operations)
reduction, achieving 97.98%, 83.42%, 78.43%, and 74.95% for the LeNet-5,
VGG-16, ResNet-56, and ResNet-110 models, respectively, while maintaining low
error rates.
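The pairing step, finding filters whose L1-norm trajectories stay close over training, can be sketched as follows. This is a minimal sketch under the assumption that per-filter L1-norms have already been recorded at each epoch; the optimization and retraining stages of HBFP are omitted:

```python
from itertools import combinations

def l1_norm(filter_weights):
    """L1-norm of a filter: the absolute sum of its weights."""
    return sum(abs(w) for w in filter_weights)

def most_redundant_pair(norm_history):
    """norm_history[i] is the list of L1-norms of filter i recorded at
    each training epoch. Filters whose norm trajectories stay close
    over training are considered redundant; return the closest pair."""
    def trajectory_distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(combinations(range(len(norm_history)), 2),
               key=lambda p: trajectory_distance(norm_history[p[0]],
                                                 norm_history[p[1]]))

# Toy example: filters 0 and 1 track each other across three epochs,
# so one of them is a pruning candidate.
history = [
    [1.0, 2.0, 3.0],   # filter 0
    [1.1, 2.1, 3.1],   # filter 1
    [5.0, 5.0, 5.0],   # filter 2
]
print(most_redundant_pair(history))  # -> (0, 1)
```

Comparing whole trajectories rather than a single epoch's norms is the point of using training history: two filters that merely cross in value at one epoch are not flagged as redundant.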