
    PithaNet: A transfer learning-based approach for traditional pitha classification

    Pitha, pithe, or peetha are all Bangla words referring to a native and traditional food of Bangladesh, as well as of some regions of India, especially those where Bangla is the primary language. Numerous types of pitha exist in the culture and heritage of the Bengali people. Pithas are traditionally prepared and offered on important occasions in Bangladesh, such as welcoming a bridegroom or bride, entertaining guests, or hosting a special gathering of family, relatives, or friends. The traditional pitha celebration and pitha culture are no longer widely practiced in modern society, and consequently the younger generation is unfamiliar with this traditional pitha culture. In this study, an effective pitha image classification system is introduced. Pre-trained convolutional neural network (CNN) models EfficientNetB6, ResNet50, and VGG16 are used to classify images of pitha. The dataset of traditional popular pithas was collected from different parts of Bangladesh. In this experiment, EfficientNetB6 and ResNet50 achieve nearly 90% accuracy, while the best classification result, 92% accuracy, is obtained with VGG16. The main motive of this study is to revive the Bengali pitha tradition among young people and people worldwide, and to encourage other researchers to pursue research in this domain.
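
    The paper does not include code, but the transfer-learning setup it describes is a standard one. The sketch below shows, under assumed dataset paths, image size, and class count, how a frozen ImageNet-pretrained VGG16 backbone (ResNet50 or EfficientNetB6 can be swapped in the same way) can be topped with a small classification head in Keras.

```python
# Hedged sketch, not the paper's code: frozen ImageNet backbone + small classification head.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10          # hypothetical number of pitha categories
IMG_SIZE = (224, 224)     # assumed input resolution
BATCH = 32

# Hypothetical directory layout: pitha_dataset/{train,val}/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "pitha_dataset/train", image_size=IMG_SIZE, batch_size=BATCH)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "pitha_dataset/val", image_size=IMG_SIZE, batch_size=BATCH)

# Apply the backbone-specific input scaling on the dataset side.
preprocess = tf.keras.applications.vgg16.preprocess_input
train_ds = train_ds.map(lambda x, y: (preprocess(x), y))
val_ds = val_ds.map(lambda x, y: (preprocess(x), y))

# VGG16 backbone; ResNet50 or EfficientNetB6 can be substituted together with
# their own preprocess_input functions.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False    # freeze convolutional features, train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```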

    Deep learning for an automated image-based stem cell classification

    Hematopoiesis is a process in which hematopoietic stem cells produce other mature blood cells in the bone marrow through cell proliferation and differentiation. The hematopoietic cells are cultured on a petri dish to form different colony-forming units (CFUs), and the aim is to identify the type of CFU produced by the stem cell. Several software packages have been developed to classify CFUs automatically; however, automated identification or classification of CFU types remains the main challenge, and most current software shares common drawbacks such as expensive operating cost and complex machinery. The purpose of this study is to investigate several selected pre-trained convolutional neural network (CNN) models to overcome these constraints for automated CFU classification. Prior to classification, images are acquired from mouse stem cells and categorized into three types: CFU-erythroid (E), CFU-granulocyte/macrophage (GM), and CFU-PreB. These images are then pre-processed before being fed into the pre-trained CNN models, which adopt a deep learning approach to extract informative features from the CFU images. Classification performance shows that the models integrated with the pre-processing module can classify the CFUs with high accuracy (96.33%) and a shorter computational time (61 minutes and 37 seconds). Hence, this finding could be used as a baseline reference for further research.
    Keywords: Automated stem cell classification; Colony-forming unit (CFU); Deep learning; Convolutional neural network (CNN)
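
    The pre-processing module is not specified in the abstract, so the following sketch only illustrates the overall pipeline: assumed resizing, contrast enhancement, and backbone-specific scaling, followed by a frozen pre-trained ResNet50 standing in for the selected CNN models. The paths, the CLAHE step, and the hyperparameters are assumptions, and the training call is left commented out because the CFU images are not available here.

```python
# Illustrative pre-processing + classification sketch (not the paper's code).
import time
import cv2                      # OpenCV for image pre-processing
import numpy as np
import tensorflow as tf

CLASSES = ["CFU-E", "CFU-GM", "CFU-PreB"]   # the three CFU types in the study

def preprocess(path, size=(224, 224)):
    """Assumed pre-processing: resize, CLAHE contrast enhancement, backbone scaling."""
    img = cv2.imread(path)                              # BGR uint8
    img = cv2.resize(img, size)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0).apply(l)         # enhance local contrast
    img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    return tf.keras.applications.resnet50.preprocess_input(img.astype(np.float32))

# A frozen ResNet50 backbone with a 3-class head stands in for the
# "selected pre-trained CNN models" compared in the paper.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      pooling="avg", input_shape=(224, 224, 3))
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train / y_train would come from the pre-processed CFU images; timing the run
# mirrors the accuracy-versus-computation-time comparison reported above.
# start = time.time()
# model.fit(x_train, y_train, validation_split=0.2, epochs=20)
# print(f"training time: {time.time() - start:.1f} s")
```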

    Evaluation of deep learning against conventional limit equilibrium methods for slope stability analysis

    This paper presents a comparison study between deep learning methods, a new category of slope stability analysis built upon recent advances in artificial intelligence, and conventional limit equilibrium analysis methods. For this purpose, computer code was developed to calculate the factor of safety (FS) using four limit equilibrium methods: Bishop’s simplified method, the Fellenius method, Janbu’s simplified method, and Janbu’s corrected method. The code was verified against Slide2 from RocScience. Using this code, a comprehensive dataset of slope images with wide ranges of geometries and soil properties was created, and the average FS values from the four methods were used to approximate the “true” FS of each slope and to label the images for two deep learning models: a multiclass classification model and a regression model. After training, the deep learning models were used to predict the FS of an independent set of slope images, and their performance was compared to that of the conventional methods. This study found that deep learning methods can reach accuracies as high as 99.71% while improving computational efficiency by more than 18 times compared with conventional methods.
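
    Two of the limit equilibrium methods named above have compact closed or fixed-point forms. The sketch below computes the factor of safety with the Fellenius (ordinary method of slices) formula and with Bishop’s simplified method for a dry, homogeneous slope; the slice geometry, soil parameters, and omission of pore pressure are assumptions, not values from the study.

```python
# Hedged sketch of two limit equilibrium methods: Fellenius (ordinary method of
# slices) and Bishop's simplified method, for a dry slope with no pore pressure.
from math import cos, sin, tan, radians

def fellenius_fs(slices, c, phi_deg):
    """FS = sum(c*l + W*cos(a)*tan(phi)) / sum(W*sin(a)) over all slices.
    Each slice is (W, alpha_deg, b): weight, base inclination, width."""
    phi = radians(phi_deg)
    resisting = sum(c * (b / cos(radians(a))) + W * cos(radians(a)) * tan(phi)
                    for W, a, b in slices)
    driving = sum(W * sin(radians(a)) for W, a, b in slices)
    return resisting / driving

def bishop_fs(slices, c, phi_deg, tol=1e-6, max_iter=100):
    """Bishop's simplified method: FS appears on both sides, so iterate to a fixed point."""
    phi = radians(phi_deg)
    fs = fellenius_fs(slices, c, phi_deg)          # Fellenius value as the initial guess
    driving = sum(W * sin(radians(a)) for W, a, b in slices)
    for _ in range(max_iter):
        resisting = 0.0
        for W, a, b in slices:
            a_r = radians(a)
            m_alpha = cos(a_r) + sin(a_r) * tan(phi) / fs
            resisting += (c * b + W * tan(phi)) / m_alpha
        fs_new = resisting / driving
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs

# Hypothetical slices along a circular slip surface: (weight kN/m, alpha deg, width m)
slices = [(30.0, -10.0, 1.0), (55.0, 5.0, 1.0), (70.0, 20.0, 1.0),
          (60.0, 35.0, 1.0), (35.0, 50.0, 1.0)]
print("Fellenius FS:", round(fellenius_fs(slices, c=10.0, phi_deg=25.0), 3))
print("Bishop FS:   ", round(bishop_fs(slices, c=10.0, phi_deg=25.0), 3))
```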

    Comparison of convolutional neural network models for food image classification

    According to World Health Organization estimates, in 2014 more than 1.9 billion adults were overweight: about 13% of the world’s adult population were obese and 39% of adults were overweight. The worldwide prevalence of obesity more than doubled between 1980 and 2014. Mobile applications that record people’s food intake are becoming popular; with an improved food classification system, users could photograph their meals and have the system classify the photos into food categories. At INISTA 2017 we proposed a deep convolutional neural network trained from scratch and compared its performance with the pre-trained structures Alexnet and Caffenet; this study is an extended version of that work. Three different deep convolutional neural networks were trained from scratch using different learning methods: stochastic gradient descent, Nesterov’s accelerated gradient, and Adaptive Moment Estimation, and compared with Alexnet and Caffenet fine-tuned with the same learning algorithms. Training, validation, and test datasets were generated from the Food11 and Food101 datasets. All tests were run through the NVIDIA DIGITS interface on a GeForce GTX 1070. According to the test results, although the pre-trained models provided better results than the proposed structures, their performances were comparable. Moreover, the learning optimization methods accelerated and improved the performance of all the compared models.
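
    As a rough illustration of the optimizer comparison (not the authors’ architecture), the sketch below builds one small CNN and pairs it with plain SGD, Nesterov’s accelerated gradient, and Adam in Keras; the layer sizes, learning rates, and dataset handling are assumptions, and the training loop is left commented out because the image folders are not available here.

```python
# Hedged sketch: the same small CNN trained from scratch with three optimizers.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 11                      # e.g. the 11 categories of Food-11
IMG_SIZE = (128, 128)                 # assumed input resolution

def build_small_cnn():
    return tf.keras.Sequential([
        layers.Input(shape=IMG_SIZE + (3,)),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

optimizers = {
    "sgd": tf.keras.optimizers.SGD(learning_rate=0.01),
    "nesterov": tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
    "adam": tf.keras.optimizers.Adam(learning_rate=0.001),
}

# train_ds / val_ds would be built with image_dataset_from_directory from the
# Food11 / Food101 folders; the loop then compares validation accuracy per optimizer.
# for name, opt in optimizers.items():
#     model = build_small_cnn()
#     model.compile(optimizer=opt, loss="sparse_categorical_crossentropy", metrics=["accuracy"])
#     history = model.fit(train_ds, validation_data=val_ds, epochs=15, verbose=0)
#     print(name, max(history.history["val_accuracy"]))
```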

    Comparison of convolutional neural network models for food image classification

    2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications (INISTA 2017), 3–5 July 2017. According to estimates of the World Health Organization (WHO), in 2014 more than 1.9 billion adults aged 18 years and older were overweight. Overall, about 13% of the world’s adult population (11% of men and 15% of women) were obese, and 39% of adults aged 18 years and over (38% of men and 40% of women) were overweight. The worldwide prevalence of obesity more than doubled between 1980 and 2014. The purpose of this study is to design a convolutional neural network model and provide a food dataset collection to distinguish the nutrition groups that people consume in daily life. For this aim, the two pre-trained models Alexnet and Caffenet were fine-tuned, and a similar structure was trained from scratch on the dataset. Food images were gathered from the Food-11, FooDD, and Food100 datasets and from web archives. According to the test results, the fine-tuned models provided better results than the structure trained from scratch, as expected. However, the trained model can be improved by using more training examples and can be used as a specific structure for classification of nutrition groups. © 2017 IEEE
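
    The abstract notes that the food images were assembled from several sources. A minimal data-preparation sketch, assuming the source datasets have already been unpacked into per-class folders mapped to the same nutrition groups, could merge them into train/val/test splits as follows; the folder names, split ratios, and file handling are assumptions.

```python
# Hedged data-preparation sketch: merge per-class image folders from several
# source datasets into one train/val/test split with collision-safe filenames.
import random
import shutil
from pathlib import Path

SOURCES = [Path("Food-11"), Path("FooDD"), Path("Food100")]   # hypothetical local paths
DEST = Path("merged_food_dataset")
SPLITS = {"train": 0.7, "val": 0.15, "test": 0.15}
random.seed(42)

def merge_and_split():
    for source in SOURCES:
        if not source.exists():
            continue                                          # skip missing datasets
        for class_dir in sorted(p for p in source.iterdir() if p.is_dir()):
            images = sorted(class_dir.glob("*.jpg"))
            random.shuffle(images)
            n_train = int(SPLITS["train"] * len(images))
            n_val = int(SPLITS["val"] * len(images))
            buckets = {"train": images[:n_train],
                       "val": images[n_train:n_train + n_val],
                       "test": images[n_train + n_val:]}
            for split, files in buckets.items():
                out_dir = DEST / split / class_dir.name
                out_dir.mkdir(parents=True, exist_ok=True)
                for f in files:
                    # prefix with the source dataset name to avoid filename collisions
                    shutil.copy(f, out_dir / f"{source.name}_{f.name}")

if __name__ == "__main__":
    merge_and_split()
```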