4 research outputs found

    Efficient Microalgae Species Identification using Compact Convolutional Neural Network

    In this study, we propose a novel approach for microscopic algae species classification by implementing a compact Convolutional Neural Network (CNN) model. Our methodology was tested on a diverse dataset consisting of 18 distinct species of microscopic algae, demonstrating a remarkable classification accuracy exceeding 99%. The strong performance of this model is attributed to its compact architecture, which maintains high precision while minimizing computational resources, making it a feasible option for real-time applications. Furthermore, we incorporated advanced data augmentation techniques to enhance the generalization capability of our model. By artificially expanding the training dataset, we increased the model's robustness to variation in the input data, which contributed significantly to its high classification accuracy. The research findings underscore the potential of compact CNN models coupled with data augmentation strategies in high-precision microscopic algae classification tasks, paving the way for future innovations in aquatic microbiology and environmental monitoring.
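
    As a rough illustration of the kind of pipeline this abstract describes, the sketch below builds a small CNN with on-the-fly data augmentation in Keras. The input size, layer widths, and augmentation ranges are illustrative assumptions, not the authors' published architecture; only the class count (18 species) comes from the abstract.

```python
# Minimal sketch of a compact CNN with data augmentation (illustrative only;
# layer widths, input size, and augmentation ranges are assumptions, not the
# authors' published architecture).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 18             # 18 microalgae species, as stated in the abstract
INPUT_SHAPE = (128, 128, 3)  # assumed image size

# Data augmentation to improve robustness to variation in the input images
augment = models.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# A deliberately small convolutional stack to keep the parameter count low
model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    augment,
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=30)
```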

    Directional Discrete Cosine Transform for Handwritten Script Identification

    Authors' copy, ICDAR International Conference on Document Analysis and Recognition (2013), Washington DC, USA. This paper presents directional discrete cosine transform (D-DCT) based word-level handwritten script identification. The conventional discrete cosine transform (DCT) emphasizes vertical and horizontal energies of an image and de-emphasizes directional edge information, which plays a significant role in shape analysis problems in particular; conventional DCT is therefore not efficient at characterizing images in which directional edges are dominant. In this paper, we investigate two different methods to capture directional edge information: one performs the 1D-DCT along the left and right diagonals of an image, and the other decomposes the 2D-DCT coefficients into left and right diagonals. The means and standard deviations of the left and right diagonals of the DCT coefficients are computed and used for the classification of words using linear discriminant analysis (LDA) and K-nearest neighbour (K-NN). We validate the method on 9,000 words belonging to six different scripts. The classification of words is performed in bi-script, tri-script and multi-script scenarios, achieving average identification accuracies of 96.95%, 96.42% and 85.77%, respectively.
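
    A minimal sketch of the diagonal-DCT feature idea is given below, assuming grey-scale word images and using SciPy and scikit-learn. It takes the 1D-DCT along the left and right diagonals and summarizes the coefficients by their mean and standard deviation before LDA / K-NN classification; the exact feature layout and normalization used in the paper may differ.

```python
# Hedged sketch of directional-DCT features: 1D-DCT along the left and right
# diagonals of a word image, then mean/std of the coefficients as features
# for LDA / K-NN. Image size and preprocessing details are assumed.
import numpy as np
from scipy.fft import dct
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def diagonal_dct_features(img: np.ndarray) -> np.ndarray:
    """Mean and std of 1D-DCT coefficients taken along both diagonal directions."""
    h, w = img.shape
    # Right diagonals (top-left to bottom-right) of the grey-level image,
    # and left diagonals (top-right to bottom-left) via a horizontal flip.
    right = [np.diagonal(img, offset=k) for k in range(-h + 1, w)]
    left = [np.diagonal(np.fliplr(img), offset=k) for k in range(-h + 1, w)]

    feats = []
    for diags in (right, left):
        coeffs = np.concatenate([dct(d.astype(float), norm="ortho") for d in diags])
        feats.extend([coeffs.mean(), coeffs.std()])
    return np.asarray(feats)

# Usage sketch: X_imgs is a list of 2D word images, y their script labels.
# X = np.stack([diagonal_dct_features(im) for im in X_imgs])
# LinearDiscriminantAnalysis().fit(X, y)
# KNeighborsClassifier(n_neighbors=5).fit(X, y)
```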

    Automatic Classification of Desmids using Transfer Learning

    This research paper presents a novel approach to classifying microscopic images of desmids using transfer learning and convolutional neural networks (CNNs). The purpose of this study was to automate the tedious task of manually classifying microscopic algae and to improve our understanding of water quality in aquatic ecosystems. To accomplish this, we utilized transfer learning to fine-tune 13 pre-trained CNN models on a dataset of five categories of desmids. We evaluated the performance of our models using several metrics, including accuracy, precision, recall, and F1-score. Our results show that transfer learning can significantly improve the classification accuracy of microscopic images of desmids, and that efficient CNN models can further enhance performance. The practical implications of this research include a more efficient and accurate method for classifying microscopic algae and assessing water quality. The theoretical implications include a better understanding of the application of transfer learning and CNNs in image classification. This research contributes to both theory and practice by providing a new method for automating the classification of microscopic algae and improving our understanding of aquatic ecosystems.
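
    The sketch below shows one common way such fine-tuning is set up in Keras, using MobileNetV2 as an assumed stand-in for any of the 13 pre-trained models the paper compares; the image size, optimizer settings, and classifier head are illustrative assumptions, with only the class count (five desmid categories) taken from the abstract.

```python
# Illustrative transfer-learning sketch: fine-tuning a pre-trained CNN
# (MobileNetV2 here as an assumed example) on five desmid categories.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # five desmid categories, as stated in the abstract

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained feature extractor first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Stage 1: train only the new head; Stage 2: unfreeze the top layers of
# `base` and continue training at a lower learning rate to fine-tune.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```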

    Directional Discrete Cosine Transform for Handwritten Script Identification

    This paper presents directional discrete cosine transform (D-DCT) based word-level handwritten script identification. The conventional discrete cosine transform (DCT) emphasizes vertical and horizontal energies of an image and de-emphasizes directional edge information, which plays a significant role in shape analysis problems in particular; conventional DCT is therefore not efficient at characterizing images in which directional edges are dominant. In this paper, we investigate two different methods to capture directional edge information: one performs the 1D-DCT along the left and right diagonals of an image, and the other decomposes the 2D-DCT coefficients into left and right diagonals. The means and standard deviations of the left and right diagonals of the DCT coefficients are computed and used for the classification of words using linear discriminant analysis (LDA) and K-nearest neighbour (K-NN). We validate the method on 9,000 words belonging to six different scripts. The classification of words is performed in bi-script, tri-script and multi-script scenarios, achieving average identification accuracies of 96.95%, 96.42% and 85.77%, respectively.